mattsretechblog: matt cohen (Default)
2018-10-12 12:00 am
Entry tags:

Security Auditing compared with Penetration Testing

Recently, while we were discussing a contract, an industry executive needed me to give an explanation of the difference between a “security audit” and a “penetration test.” The party with whom the executive was negotiating the contract had changed the contractual requirement from the one to the other. Since this might be of interest to others, I’ll provide the explanation here on the blog.

The short explanation is that a “penetration test” is just one small component of a “security audit,” and that the software / service provider was aiming for a much lower bar than the one we were targeting on behalf of the executive and her MLS. Following is an explanation of just how much lower that bar is.

A penetration test is an attempt to find weaknesses in the defenses in a computer system’s security. Sometimes the term is used in reference to non-computer security, but typically not. The test usually consists of a combination of automated and manual testing to find specific attacks that will enable the attacker to bypass certain defenses, then working to find additional vulnerabilities that can be found only once those initial defenses are defeated. As a result of this testing, risk can theoretically be assessed and a remediation plan put in place. One can also evaluate how well an organization detects and responds to an attack.

One significant problem with penetration testing is that, if the system being tested has good outer defenses, a penetration test may not find security risks lurking inside those defenses. It may then report that risk is low and no action is needed. Then, when a change to the outer defense introduces a vulnerability (or a vulnerability is discovered in that outer defense), actual attackers will breach all of the inner layers where security has not been well designed. It’s a well-known principle of security design to build multiple layers of security (a/k/a “defense in depth”). Penetration testing, on its own, doesn’t reliably measure whether this has been done properly.

For the more technical readers, I will provide an example:

Let’s say a web application has been written with excellent protections against SQL injection attacks. The penetration testing is run against the application, no SQL injection issues are found, and the system passes with flying colors. But let’s say the web application connects with full database server privileges (the “db_owner” role on MS SQL Server, or a DBA or over-privileged user on MySQL), and that the database platform service itself runs with full system privileges (an “Administrator” account on Windows or “root” on Linux). Or let’s say that someone left MySQL’s “INTO OUTFILE” capability enabled, along with other issues allowing remote file access. Then, one day, a programmer makes a single mistake (that never happens, right?), all of the poor configuration behind the web application is exposed, and a hacker can easily grab database contents and take over the database server or even the whole network. I’ve taken an attack exactly that far – from the login prompt on the outside of an application – during a sanctioned security audit, of course!
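To make the injection part of that scenario concrete, here is a minimal, hypothetical sketch in Python, using an in-memory SQLite database standing in for the web application’s backend (the table, account names, and payload are made up for the example). It contrasts a concatenated query, which a classic injection payload subverts, with a parameterized query that treats the same payload as inert data:

```python
import sqlite3

# In-memory database standing in for the web application's backend.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(username, password):
    # Vulnerable: user input is concatenated directly into the SQL string.
    query = ("SELECT COUNT(*) FROM users WHERE username = '%s' "
             "AND password = '%s'" % (username, password))
    return db.execute(query).fetchone()[0] > 0

def login_safe(username, password):
    # Parameterized: the driver treats the input strictly as data, not SQL.
    query = ("SELECT COUNT(*) FROM users WHERE username = ? "
             "AND password = ?")
    return db.execute(query, (username, password)).fetchone()[0] > 0

payload = "' OR '1'='1"  # classic injection payload
print(login_unsafe("alice", payload))  # True  -- authentication bypassed
print(login_safe("alice", payload))    # False -- payload matches nothing
```

The injection alone is bad enough; in the scenario above, it is the over-privileged database and operating system accounts behind it that turn one coding mistake into a full server compromise.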

The other problem with using a penetration test as the sole way to measure security is that information security is a much broader area of exploration, normally measured during a full security audit. Most of the breaches we’ve had in this industry have been the result of weak policy and procedure or the contracts that reflect those policies, inadequate human resources practices, and physical security issues. Still other technical security issues have resulted from a lack of protection against screen scraping, and yet others from authentication related issues – both items that most penetration testing tools cannot easily uncover. Looking at the security configuration of network equipment, servers and workstation operating systems, platforms, and installed software, mobile devices, antivirus, printer and copier configuration (yes, I really said copiers!), password selection, backup practices and so much more – all of that falls outside the purview of typical penetration testing and is only reliably addressed via a full security audit.

There’s nothing wrong with using a penetration test as a part of a security audit – just don’t mistake the part for the whole.
mattsretechblog: matt cohen (Default)
2017-07-05 12:00 am
Entry tags:

Preventing Screen Scraping: Policy, Contracts and Technology Evaluation

When organizations create policy requiring the prevention and monitoring of screen scraping and other automated attacks, it’s important for those organizations to be specific enough that compliance with the policy can be measured in some way. It is equally important for the organization to ensure that their technology contracts contain clear and explicit terms that implement those policies.

If policies and contracts do not contain specific anti-scraping technology requirements, one can easily end up in an argument over whether the steps taken to prevent scraping are sufficient, even if those steps are demonstrably ineffective. For example, a website provider might implement a “CAPTCHA” on login and say, “This should be enough to prove the humanity of the user. It’s not a computer program using the website.” But not only are many CAPTCHA tests easy for computers to defeat (it’s an arms race!), but if all a data pirate needs to do is have a human being log in and/or complete a CAPTCHA test once per day, and then have a computer capture the cookie (containing session information) for use in scraping data, it’s not a very high barrier.

Likewise, an anti-scraping solution might block an IP address as being used by a scraper if the website gets more than 20 or 30 information requests per minute from that address – and while that seems like a reasonable step, these days the more advanced scrapers spin up a hundred servers on different IP addresses, have each of them grab the data from just a few pages, then move those servers to different IP addresses. Thus, anti-scraping is difficult, and while the mechanisms mentioned above might play a part in a solution, one must deploy a more comprehensive solution to have a reasonable chance of actually stopping the screen scrapers. Moreover, that more comprehensive solution should be detailed explicitly both in the terms of any contracts executed by the defending organization and in any policies it implements regarding reasonable security measures.
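As a rough illustration of why the per-IP threshold fails on its own, here is a hypothetical sketch of a naive sliding-window rate limiter. The 30-requests-per-minute limit and the IP addresses are invented for the example, and real scrapers are far more varied than this:

```python
import time
from collections import defaultdict, deque

class NaiveRateLimiter:
    """Minimal sketch of per-IP rate limiting (illustrative only)."""

    def __init__(self, max_requests=30, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # too many recent requests from this IP
        q.append(now)
        return True

limiter = NaiveRateLimiter()

# A single aggressive IP making 100 requests in 100 seconds is caught...
blocked = [not limiter.allow("203.0.113.7", now=t) for t in range(100)]
print(any(blocked))   # True: requests beyond the limit were refused

# ...but the same 100 requests spread across 100 rotating IP addresses
# all slip through, because each address stays under the threshold.
evaded = [limiter.allow("198.51.100.%d" % i, now=0) for i in range(100)]
print(all(evaded))    # True
```

This is exactly the gap the distributed scrapers described above exploit, which is why rate limiting can only be one layer of a comprehensive solution.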

Anti-scraping requirements might look something like the following:

The display (website, app’s API) must implement technology that prevents, detects, and proactively mitigates scraping. This means implementing an effective combination of the countermeasures defined in the “OWASP Automated Threat Handbook” “Figure 7: Automated Threat Countermeasure Classes” (reproduced below and available at
https://www.owasp.org/index.php/OWASP_Automated_Threats_to_Web_Applications). Those countermeasures must be demonstrably effective against commercial scraping services as well as advanced and evolving scraping techniques.

The anti-scraping solution must consist of multiple countermeasures across all three classes of countermeasures (prevent, detect, and recover) as defined by OWASP, sufficient to address all aspects of the security threat model, including at least complete implementations of all of the following: Fingerprinting, Reputation, Rate, Monitoring, and Instrumentation.

Those fielding displays and APIs requiring anti-scraping technology must demonstrate compliance with the above requirements, whether using technology they have built or a commercial product/service. It must be demonstrated both that the technology meets those requirements and that it has been properly configured to effectively address scraping.

Following is some more detail about each countermeasure:

Countermeasure | Brief Description | Prevent | Detect | Recover (Mitigate)
Requirements | Define relevant threats and assess effects on site’s performance toward business objectives | X | X | X
Obfuscation | Hide assets, add overhead to screen scraping and hinder theft of assets | X | – | –
Fingerprinting | Identify automated usage by user agent string, HTTP request format, and/or device fingerprint content | X | X | –
Reputation | Use reputation analysis of user identity, user behavior, resources accessed, not accessed, or repeatedly accessed | X | X | –
Rate | Limit number and/or rate of usage per user, IP address/range, device ID / fingerprint, etc. | X | X | –
Monitoring | Monitor errors, anomalies, function usage/sequencing, and provide alerting and/or monitoring dashboard | – | X | –
Instrumentation | Perform real-time attack detection and automated response | X | X | X
Response | Use incident data to feed back into adjustments to countermeasures (e.g. requirements, testing, monitoring) | X | – | X
Sharing | Share fingerprints and bot detection signals across infrastructure and clients | X | X | –

The appendix that follows provides more thorough OWASP countermeasure definitions.

Appendix: OWASP Countermeasure Definitions

The following is excerpted from “OWASP Automated Threat Handbook Web Applications”:

•    Requirements. Identify relevant automated threats in security risk assessment and assess effects of alternative countermeasures on functionality usability and accessibility. Use this to then define additional application development and deployment requirements.
•    Obfuscation. Hinder automated attacks by dynamically changing URLs, field names and content, or limiting access to indexing data, or adding extra headers/fields dynamically, or converting data into images, or adding page and session-specific tokens.
•    Fingerprinting. Identification and restriction of automated usage by automation identification techniques, including utilization of user agent string, and/or HTTP request format (e.g. header ordering), and/or HTTP header anomalies (e.g. HTTP protocol, header inconsistencies), dynamic injections, and/or device fingerprint content to determine whether a user is likely to be a human or not. As a result of these countermeasures, for example, browsers automated via tools such as Selenium must certainly be blocked. The technology should use machine learning or behavioral analysis to detect automation patterns and adapt to the evolving threat on an ongoing basis.
•    Reputation. Identification and restriction of automated usage by utilizing reputation analysis of user identity (e.g. web browser fingerprint, device fingerprint, username, session, IP address/range/geolocation), and/or user behavior (e.g. previous site, entry point, time of day, rate of requests, rate of new session generation, paths through application), and/or types of resources accessed (e.g. static vs dynamic, invisible/ hidden links, robots.txt file, paths excluded in robots.txt, honey trap resources, cache-defined resources), and/or types of resources not accessed (e.g. JavaScript generated links), and/ or types of resources repeatedly accessed. As a result of these countermeasures, for example, known commercial scraping tools and the use of data center IP addresses must certainly be identified and blocked.
•    Rate. Set upper and/or lower limits and/or trend thresholds, and limit number and/or rate of usage per user, per group of users, per IP address/range, and per device ID/fingerprint. Note that this kind of countermeasure cannot stand alone, as hackers commonly utilize a slow crawl from many rotating IP addresses that can simulate the activity of legitimate users.
•    Monitoring. Monitor errors, anomalies, function usage/sequencing, and provide alerting and/or monitoring dashboard.
•    Instrumentation. Build in application-wide instrumentation to perform real-time attack detection and automated response including locking users out, blocking, delaying, changing behavior, altering capacity/capability, enhanced identity authentication, CAPTCHA, penalty box, or other technique needed to ensure that automated attacks are unsuccessful.
•    Response. Define actions in an incident response plan for various automated attack scenarios. Consider automated responses once an attack is detected. Consider using actual incident data to feed back into other countermeasures (e.g. Requirements, Testing, Monitoring).
•    Sharing. Share information about automated attacks, such as IP addresses or known violator device fingerprints, with others in same sector, with trade organizations, and with national CERTs.
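As a toy illustration of the Fingerprinting class of countermeasures above, the following hypothetical Python sketch flags likely automation using just two signals: a user-agent denylist and a crude header-ordering check. Real fingerprinting products combine many more signals, and the “browser-like” header order assumed here is purely illustrative:

```python
# Hypothetical signals for the sketch (not a real product's rules).
KNOWN_BOT_AGENTS = ("curl", "python-requests", "scrapy", "selenium")

# Assumption for illustration: real browsers tend to send these headers
# first, while many simple bots emit them in library-default order.
BROWSER_LIKE_PREFIX = ("host", "user-agent", "accept")

def looks_automated(headers):
    """headers: list of (name, value) pairs in the order received."""
    ua = dict((k.lower(), v) for k, v in headers).get("user-agent", "")
    # Signal 1: user-agent string names a known automation tool.
    if any(bot in ua.lower() for bot in KNOWN_BOT_AGENTS):
        return True
    # Signal 2: first three headers deviate from the browser-like order.
    order = [name.lower() for name, _ in headers[:3]]
    return order != list(BROWSER_LIKE_PREFIX)

browser = [("Host", "example.com"),
           ("User-Agent", "Mozilla/5.0 (Windows NT 10.0) Firefox/115.0"),
           ("Accept", "text/html")]
bot = [("Accept", "*/*"),
       ("Host", "example.com"),
       ("User-Agent", "python-requests/2.31")]

print(looks_automated(browser))  # False
print(looks_automated(bot))      # True
```

Any single signal like this is trivially spoofed, which is exactly why the OWASP guidance pairs fingerprinting with reputation, rate, monitoring, and instrumentation rather than relying on it alone.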





mattsretechblog: matt cohen (Default)
2016-09-01 12:00 am
Entry tags:

Reducing the Risk of Real Estate Wire Fraud

The Groundbreaking Definitive Paper on the Subject

Ongoing bad publicity related to real estate transaction wire fraud threatens to damage the reputation of the real estate industry. Previous issue descriptions have been incomplete and have also not provided thorough guidance for those brokerages, title companies, attorneys, and others that engage internally in the wiring of funds or interact with the consumer regarding wire transfers. This paper provides information relevant not just for those directly establishing and communicating wiring instructions, but also for those just acting in an advisory role to the consumer during the transaction.

The paper includes best practices – sample procedures and communications gathered from franchises, brokers, title companies and others. It also includes a sample company policy Clareity Consulting has created based on both these best practices and on our experience advising brokerages and other companies regarding information security. Not all aspects of this paper will apply to all companies involved in the transaction, and what is relevant will still need to be tailored to an individual company’s business processes. Still, it provides a valuable starting place toward reducing the risk of wire fraud.

Please download the paper here:
Reducing the Risk of Real Estate Wire Fraud

mattsretechblog: matt cohen (Default)
2015-07-07 12:00 am
Entry tags:

NAR’s D.A.N.G.E.R. Report – Focusing on the Security Risk

Understanding the Risk and How to Address It

In the D.A.N.G.E.R. Report, compiled for NAR by Stefan Swanepoel, a “security breach” is listed as a “high risk” for MLSs, though it is quite clear that the risk is to the industry more generally. According to Stefan, “Cyber criminals could attack the industry, breach the MLS, and cause disruption … as transaction management systems and mortgage systems are added or integrated, this threat becomes more serious.”

Only some leaders in our industry are taking this risk – and their responsibility – seriously. Everyone knows how important installing security updates (“patching” or “upgrading”) is, but between 28% and 46%, depending on industry segment, don’t take even that basic, no-cost step for their web servers, even though their organization’s websites are often a gateway for services for many stakeholders. Following is a chart of different industry segments, showing what percent were running unpatched servers with known vulnerabilities as of January 2015:


We’ve got to do better than that. So, where does the industry need to focus its attention?

Security Assessment

Security risk assessment is the first step. Clareity Consulting has been helping clients with security risk assessments for the last thirteen years. Organizational security is not something executives should just be handing off to their technical staff, since many of the most important and foundational areas of security are non-technical. Vetting new staff and training them in how to deal with data securely is critical. Writing good policy and procedure documents makes it clear to employees, including both technical and non-technical employees, what their responsibilities are in securely operating your business. Detailed contracts extend these responsibilities to those who provide you with services, and to independent contractors. Also, physical security is extremely important – everything from making sure access is granted only to authorized people to making sure that sensitive information is handled properly and disposed of both timely and securely. Areas of more technical evaluation include routers and firewalls, wireless devices, virtualization environments, websites and applications, installed software, use of encryption, server and workstation operating systems, mobile device configurations, printers, copiers, and much more.

Payment Card Industry Data Security Standard (PCI DSS) compliance is also necessary in order for your business to process credit card information. The latest version of the standard now requires compliance even from organizations whose websites merely redirect their e-commerce transaction-processing functions to a third party, even though those organizations have outsourced their credit card functions to that third party.

Having an in-depth knowledge of how information flows in our industry as well as both the business and technical aspects of electronic transactions enables Clareity Consulting to help organizations with this kind of risk assessment most efficiently.

Authentication

The D.A.N.G.E.R. Report focuses on login authentication as applied to MLSs and integrated systems such as transaction management and document management. Unfortunately, often only weak authentication is in place for broker systems that are integrated with such systems, and for the direct logins to those systems. Only about half the industry has implemented strong authentication for the MLS and integrated products and, in some cases, even those that have implemented strong authentication for the MLS have not required that all licensed and/or integrated software integrate the stronger measures.

Our industry has a complicated authentication problem to solve. Most login security solutions are not designed to work when the users actively wish to share their account. When mobile professionals have unusual computer use patterns and share computers in a “bullpen” situation, it is even more difficult to provide an effective security system. Even when an MLS has extensive fines for account sharing, we have found that subscribers are quite willing to try to share their login account with non-subscribing professionals, consumers, and others. Most MLSs prefer an authentication solution that focuses on stopping long-term account sharing to one that may cause some inconvenience to users but more reliably prevents an unauthorized user from using account information to access an account even once.

Clareity Security, the leading provider of strong authentication to the industry, has an amazing array of technology that can be used to provide more proactive security, including an exciting new biometric method for mobile devices that is patent pending. Even the current technology Clareity Security has in place at MLSs can be configured to be more stringent, but the industry has to have the will to implement it in that way. Hopefully the industry will move in that direction before there is a publicized security breach.

Clareity Security has always recognized the need to balance cost and convenience with security. That is why Clareity Security has built a RISK based scoring system that has been fully customized to real estate use cases, ensuring that when strong authentication is necessary based on risky behavior and/or application sensitivity, it can be implemented.

Clareity Security also leads the industry in the best practice implementation of secure single sign-on (SSO). Using the SAML (Security Assertion Markup Language) standard, the company has provided a way for hundreds of applications to be SSO integrated while still maintaining good security. I have also seen some very bad implementations of SSO that create tremendous risk for the organization because they have little to no security.

There are new, emerging authentication challenges as well. For example, it is very difficult to protect authentication to APIs, especially those that are designed to be used by mobile applications. This is an issue that I presented at a RESO meeting in 2013 and, to date, only limited protective mechanisms have been developed by some vendors.

[UPDATE: Clareity was acquired by CoreLogic in 2017 and both authentication solutions and professional services are provided under that brand.]

Screen Scraping

Though consumers are providing their agents with information solely for the purpose of marketing their property for sale, it’s very difficult to ensure real estate information is being used for only that legitimate purpose, since our industry sends the content to so many locations online. One of the biggest challenges is “screen scraping” – when someone copies large amounts of data from a website, manually or with a script or program. There are two kinds of scraping: “legitimate scraping,” such as search engine robots that index the web, and “malicious scraping,” where someone engages in systematic theft of intellectual property in the form of data accessible on a website. Malicious scraping has been a severe problem in the real estate industry for a decade or more. Bad actors and their software “bots” can grab MLS data and resell it or use it for their own purposes.

The very largest sites, such as Realtor.com, have invested millions in anti-scraping solutions, and these solutions have to be constantly updated as scrapers become ever more sophisticated. However, as some sites take steps to protect the content, that pushes the scrapers out to harvest content from other sites that are less well protected. For our industry, Clareity Consulting recommends a company it has worked with (and is currently consulting for), Distil Networks, which provides a robust and scalable technology solution at a cost that even an individual agent can afford for their site. Distil Networks’ solution has so far been adopted by one MLS vendor, some IDX vendors, and some individual VOW sites. While some advertising portals have implemented anti-scraping measures as well, the industry has a long way to go to protect its content both in the MLS system and its public-facing modules (client collaboration, framed IDX) and in many of the other locations to which content has been syndicated.

[UPDATE: In the years since this article was written there other good anti-scraping options have emerged such as Incapsula and Akamai Bot Manager.]

Next Steps

Information security is not something where one can “set it and forget it”. New challenges emerge regularly. The policies and procedures, contract terms, and even the security audit tools I used five years ago have changed radically – they are always changing. We’ve got to get both authentication and screen scraping under control. If you’re an executive in this industry, whether of an association, MLS, broker, or software provider, it’s time to move faster and press harder on security. Start with assessment. I can help with that. Then we can figure out the next steps for your organization. Just remember, security isn’t something you achieve: it’s an ongoing practice.

mattsretechblog: matt cohen (Default)
2015-04-27 12:00 am
Entry tags:

Security Assessment: Demystifying the Process

Don’t Be Afraid of the ‘Audit’

Information security laws are still a patchwork nationwide, but an increasing number of industry organizations are finding that they need our assistance to comply with laws and other applicable requirements. The purpose of this post is to help demystify the process.

Many industry organizations are just realizing all of the requirements that apply to them. For example, the latest Payment Card Industry Data Security Standards (PCI DSS) now require compliance even from organizations that outsource their e-commerce transaction-processing functions to a third-party service provider, but where the organization’s website controls the redirection to that provider. Understanding those types of compliance requirements, as well as those mandated by federal, state, or provincial legislation, is a starting point for any assessment.

A lot of clients take comfort in my approach to assessing security. For many organizations, especially for the IT people, a security assessment can seem daunting. What if issues are found? Will we look bad in front of the assessor? All I can do is assure clients that I’m there to help, not to judge. Yes, typically issues are found, especially if it’s an organization’s first assessment or if they haven’t had an assessment in a few years. Finding issues is simply the first step toward addressing them.

Some assessors spend most of their time by themselves using their assessment tools, then present a report that feels like an adversarial “Gotcha!” to clients. I take a very different approach, one where I’m working alongside my client, using tools and checklists in collaboration with them. This has some important benefits. With no surprises at the end of the assessment, there’s less of an adversarial feeling. Also, by educating my client in the use of common and free (or inexpensive) assessment tools and other security resources, they become empowered. IT staff are left feeling more educated and valuable, rather than feeling defeated by an outsider finding issues. The best part is that, by empowering my clients to perform at least some level of ongoing self-assessment, they are more likely to maintain better security in the long run.

Before the visit: Typically, I schedule two to three days for a visit with my client. A few weeks before the visit, I ask for any security-related information the organization might have: a list of websites and apps, information security policies (usually a part of an employee handbook), a list of third-party service providers and parts of contracts that are relevant to security, and the office and data center internet address (IP) ranges. If some of that information isn’t available, that’s okay. If I’m going to do any testing of applications hosted by third parties, at that point I need my client to coordinate that testing with their service provider. Then I review the materials provided and perform some initial “external” testing prior to my visit. If assistance with PCI DSS compliance is requested, I work with my client to start that process as well.

During the visit: I like to start discussions with management – looking together at staffing practices, physical security, policy and procedure, contracts, and other less-technical aspects of security. Then I dive into the technology with the staff (or sometimes contractors) who are responsible for managing it. Together, we’ll look at everything from routers and firewalls all the way down to the operating systems, and everything between. If PCI DSS compliance is in progress, we will review any outstanding questions my client needs assistance with. At the end of the visit, if everyone is available, I like to bring everyone involved together to discuss findings and the process of planning issue remediation.

After the visit: Sometimes there are subsequent discussions of findings after the visit. Also, I provide a lot of phone and email support and follow up, to ensure that the organization is efficiently moving forward in their efforts to improve security and to answer questions that arise along the way.

Hopefully this post has demystified the security assessment process. When it comes to information security, our industry has a lot of work to do. I benchmark the industry regularly in a number of ways. One small measure I take is “What percent of websites are running on known insecure web server platforms?” My present benchmark for that measure is: 28% of the top 50 MLSs (by subscriber count), 46% of the top 50 brokers (by transaction volume), 40% of top local associations, and 35% of our state associations. That measure is just the tip of the iceberg, too – again, there’s a lot of work to do! Contact me (612-747-5976), and let’s start working on it together.
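A simplified, hypothetical sketch of the kind of check behind that benchmark – flagging servers that advertise an outdated version in their HTTP Server header – might look like the following. The minimum-version table is a placeholder for illustration, not current security advice, and a header alone is only a weak signal (it can be hidden or falsified):

```python
# Hypothetical minimum acceptable versions, for illustration only.
OUTDATED_BEFORE = {
    "apache": (2, 4),
    "nginx": (1, 24),
}

def is_outdated(server_header):
    """Return True if a Server header advertises an outdated version."""
    if not server_header or "/" not in server_header:
        return False  # no version disclosed; cannot tell from this alone
    name, _, version = server_header.partition("/")
    minimum = OUTDATED_BEFORE.get(name.strip().lower())
    if minimum is None:
        return False  # server software not in our (toy) table
    try:
        # Take "2.2.15" from "2.2.15 (CentOS)" and compare major.minor.
        parts = tuple(int(p) for p in version.split()[0].split(".")[:2])
    except ValueError:
        return False
    return parts < minimum

print(is_outdated("Apache/2.2.15 (CentOS)"))  # True: below 2.4
print(is_outdated("nginx/1.25.3"))            # False
print(is_outdated("cloudflare"))              # False: no version exposed
```

In a real assessment this kind of quick screen only prioritizes targets; confirming an actual vulnerability requires checking the specific patch level and configuration, not just the advertised version string.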

mattsretechblog: matt cohen (Default)
2014-10-30 12:00 am
Entry tags:

Security and the Great Flywheel of Convenience

Security Lets Us Provide a Better User Experience

A Ten-Year Journey

In his bestselling business book “Good to Great”, James C. Collins introduces the concept of the flywheel. Outside of business, a flywheel is a heavy wheel that, once momentum is built, can store and provide consistently great power to a machine even when regular power is interrupted. The tricky thing about a heavy flywheel is that it can take great effort to overcome its initial inertia, and sometimes it takes a long time of pushing to get it up to full speed. Collins uses the flywheel as a metaphor for business, describing how, with consistent effort and persistence, a company can achieve great things.

Clareity Security has been pushing on some great flywheels for some time, and it is perhaps overdue to describe those efforts to the industry at large. 

The flywheel pushing began ten years ago. A client of Clareity Consulting needed to solve the issue they were having with unauthorized system access and theft of data, which had become a national epidemic due – in part – to lax password-only security. Clareity Security was highly successful at solving this problem, and after a few years was protecting logins for half of the industry with the SafeMLS® product. However, there was always some pushback on information security measures as inconvenient. Security in every context is always a bit inconvenient – it would surely be convenient to never have to remember house keys and just leave the door unlocked, but we don’t want someone walking in and taking our possessions. Still, Clareity Security was listening, and started exploring new ways to provide security more conveniently, though the new authentication technology would take some time to develop and deploy.

The push for greater convenience also led the way for something which is now a common term in our industry, “Single Sign-On” (SSO). Clareity Security, with NAR’s support, created an open-source toolkit and reference implementation for secure “SAML” SSO in 2007. Vendors started to deploy more secure single sign-on in more places, making life easier for real estate professionals, while still ensuring best practices for data security and authentication.

While SSO was gaining momentum, starting in 2009 Clareity Security began to deploy a new, “zero footprint authentication” solution – a way to provide security while providing the minimum inconvenience to end-users possible. Using new technologies, security could be provided without carrying around hardware “key fobs” to generate one-time-use passwords and without software installation.

But one of the greatest breakthroughs occurred in 2010, when Clareity Security integrated Miami REALTORS®’ various applications using SSO technology to create a single dashboard for their members. Thus began Clareity Security’s SSO Dashboard initiative, which allows MLSs, Associations, brokers and others to present all the tools they provide to their subscribers (members, etc.) in one convenient location, with secure, convenient SSO.

Keep in mind that what you are seeing today is just the first version of the SSO Dashboard – what is coming in 2015/2016 will make our industry look back and say, “That was a good starting place.” Clareity Security is committed to growing and enhancing the SSO Dashboard to meet the changing needs of the busy real estate professional.

But, that was a lot of background – let’s look at this history in terms of what industry issues are being addressed with all this technology: 

  1. The Value Proposition Problem. MLSs, Associations, and brokers now have a platform for expressing the value they provide by keeping their whole offering in front of users and giving them convenient access through SSO.
  2. The 90/10 problem. Traditionally, 90% of users use 10% of the offering. The SSO Dashboard significantly improves users’ adoption of tools. In a recent case study, adoption of certain applications rose by an average of 75% within the first month of Dashboard implementation.
  3. The Robust “Site License” Offering vs. Differentiation problem. There has always been a strain between MLSs wanting to provide everything needed for agents to be professional yet leaving room for brokers and agents to differentiate using tools not in use by others in their market. The SSO Dashboard allows organizations to provide a platform that provides just the right balance.
  4. The Convenience / Security problem. SSO, by its very nature, creates an increased security risk: more resources – including platforms holding the most sensitive information our industry handles (document & transaction management) – are available behind a single login. Because Clareity Security provides a great security solution that can be combined with the SSO Dashboard, this risk can be mitigated.
  5. Centralized Identity. With the introduction of SSO Dashboard including a SAFEMLS Identity Provider (IdP), everyone benefits from not having to remember and synchronize different user IDs and passwords across multiple platforms.
  6. The Revenue Assurance problem. When logins are shared, the honest folks pay for the resources being stolen by others. Clareity Security makes sure everyone pays only their fair share by reducing the number of “freeloaders” and reducing unauthorized access.

Clareity Security continues to push hard on the flywheel and innovate. In 2014, we released SAFEMLS Plus, providing even better security, especially for mobile users – plus an improved interface for customers’ staff. Clareity Security is working with specialist interface designers now on creating an even more amazing experience to be unveiled in 2015. And that’s just the beginning. If we keep pushing and the industry pushes with us, widespread SSO Dashboard deployments should eventually provide a personalized dashboard experience for the individual real estate professional. A tool that combines franchise, broker, MLS, association, and eventually even personal apps is the ultimate in convenience and power for real estate professionals.

It has been an amazing ten years for Clareity Security, growing from just being a security company to an integration company to a convenience and user experience company while retaining the best parts of its past while quickly moving toward its future.

When Clareity Security started on its journey, we had some good things in mind for our industry. But now that Clareity Security has been pushing for ten years, we can see great things ahead. An industry colleague once said, “We’re an industry driven too often by our fears and not enough by our dreams.” 

mattsretechblog: matt cohen (Default)
2014-08-24 12:00 am
Entry tags:

Solving the RETS Credential Re-Use Conundrum

How Many Times Must a Tech Provider Download the Same Listings?

I received a call recently from an MLS administrator who wanted to talk about a RETS issue that had been bothering him. His MLS charges a small fee to subscribers for a RETS feed; the fee covers the costs related to the feed, including compliance audits. He was noticing that many of the RETS credentials that subscribers were paying for weren’t being used and thought this was a bit of a mystery. Should he disable the unused RETS credentials and stop charging the subscribers? That course of action would make sense if his subscribers truly no longer needed the data. But there was a more likely culprit behind most of his mystery.

Quite often a subscriber’s RETS feed isn’t just associated with the subscriber, but with a third-party vendor providing IDX, VOW, CMA, statistics, and/or broker back-office systems to multiple MLS subscribers. Let’s say the vendor has downloaded the IDX data on behalf of one broker. If the vendor has 19 more customers associated with that MLS, does it really make sense for the vendor to download and store the data 19 more times, using the additional 19 RETS credentials? That seems like a real waste of server, bandwidth, and storage resources. On the other hand, suppose the vendor re-uses the credentials. Further suppose that the MLS administrator turns off unused credentials, the subscriber whose credentials have been used by the vendor to download data goes inactive, and his or her credentials are disabled by the MLS. The flow of data to the other 19 websites will be cut off. That’s not good!

Besides the potential for data disruption, there are other reasons why an MLS administrator may not like re-use of credentials:

1.    Credential re-use takes authorization control out of the hands of the MLS. If the vendor doesn’t know that a subscriber whose credentials they aren’t using has gone inactive, the vendor may accidentally continue servicing him or her with data obtained under another subscriber’s credentials.
2.    Similarly, re-use may defeat opt-outs for individual uses.
3.    The problem is actually even more complex if the vendor has multiple products. The vendor may download a superset of all data they need for a broker back-office use. Then, by re-using a subset of the data for an IDX site, the vendor may accidentally use fields and listings in certain statuses that would not normally be available to the IDX feed, inadvertently using the data inappropriately.
4.    Credential re-use partially defeats the use of data seeding, i.e., trying to figure out where exactly there’s a data leak.

Having unpacked some of the issues, there seem to actually be two questions regarding RETS credential re-use that need to be considered:

1.     Is it okay to re-use data feed credentials for multiple parties with the same use?
2.     Is it okay to re-use data feed credentials for one or more parties with different uses?

So, re-stating the conundrum simply: it’s terribly inefficient for all parties when vendors download and store multiple copies of data, one for each customer and credential, but there are valid reasons why MLSs have looked negatively at the practice of credential re-use. How do we solve this for everyone?

There are several possible ways to address the authorization control and opt-out issues including, but surely not limited to, the following:

The vendor can log in using all MLS-provided credentials at least once per day to figure out, based on RETS login failures, which subscribers no longer have rights to use the data. The vendor wouldn’t download data with each login – just one of them – but this way the MLS has a record that the vendor has checked whether each login / use is still active on the RETS server, and the vendor can then take steps to eliminate data use for any inactive subscriber.

The vendor can be given a RETS login by the MLS that gives the vendor access to the roster, limited to a subscriber identifier and status (active, inactive). The vendor can use this to check if they need to stop re-using credentials on behalf of a specific customer.

RETS standard and server functions can be designed to return validation codes for all authorized specific MLS users and uses based on a single login credential, and return data based on that information. This will directly reflect the kind of master agreements and addendums that many MLSs have with these vendors already. If no MLS users are active and related to a vendor credential, the vendor credential will not provide data access.
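
The status-check approaches described above boil down to simple bookkeeping on the vendor side. The sketch below illustrates that logic in Python; the function name, data shapes, and the idea of mapping HTTP status codes to subscriber activity are illustrative assumptions, not part of the RETS specification.

```python
# Hypothetical sketch: deciding which re-used RETS credentials should be
# retired after a once-per-day login pass against the MLS's RETS server.
# Assumes 401/403 responses indicate a deactivated subscriber login.

def credentials_to_retire(login_results):
    """Given {credential_id: http_status} from a daily login check,
    return the credentials whose subscribers appear inactive."""
    inactive_codes = {401, 403}
    return sorted(cid for cid, status in login_results.items()
                  if status in inactive_codes)

# Example: three subscriber credentials checked once per day.
results = {"broker_a": 200, "broker_b": 401, "broker_c": 403}
print(credentials_to_retire(results))  # ['broker_b', 'broker_c']
```

A vendor running something like this daily gives the MLS an audit trail of the check while still downloading the data itself only once.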

The inappropriate data use issue is a bit trickier. It can be mitigated today to some degree via very clear license agreements, vendors being careful to use the data subsets as specified by those agreements, and by MLSs auditing the end-uses of the data (i.e., the IDX websites and VOWs) – something they should be doing anyway. Additional mitigations may require some RETS standard and server-side function enhancements. For example, additional opt-in/opt-out information can be passed to vendors where relevant. Also, a server-side function could be created to efficiently determine whether different credentials would return different data for a query – without the vendor having to download and compare the data on the client side. Knowing whether different credentials would provide different data may make it easier for a vendor to decide whether re-use is appropriate.

I don’t think there’s a way to fully resolve the data seeding issue while allowing credential re-use, but tracking an issue down to who received the feed is still possible. Vendors just need to cooperate with any seeding investigation to help figure out what specific usage is involved. Data seeding is only of use in a very limited subset of illegitimate use detections anyway.

There are more conversations to have on this subject, looking at additional business and legal issues as well as technical reflections of those issues, but this is a starting point. Let’s figure this out, so that RETS service can be efficiently provided to stakeholders while addressing legitimate issues that arise with that efficiency. What’s next? Let’s discuss these and other ideas for solving the issue here on this blog, on Facebook, and perhaps at the upcoming RESO meeting and see if some consensus can be reached among both vendors and MLSs. If changes to RETS are desired, this can be dealt with in RESO workgroups and implemented by vendors as need be.

I know many vendors that simply must engage in credential re-use so they don’t overwhelm MLS RETS servers and so they don’t needlessly increase their costs to service multiple customers – but they don’t like being in violation of some of their license agreements with regard to credential use. I’ve even had clients fine such vendors – and while this is in accord with the letter of some current license agreements, it’s really not fair. These are not “bad vendors.” By not defining our standards, process and legal agreements to reflect the technical reality of data aggregation and use, we’ve created this ugly issue together. But together, we can solve it, and we should do so as quickly as possible.


mattsretechblog: matt cohen (Default)
2013-03-23 12:00 am
Entry tags:

Screen-Scraping – Finally, the Real Estate Industry Solution

In 2011 and 2012, Realtor.com was under the gun to solve the problem it had with screen scrapers, where sites were “scraping” data off of their site and using it in unauthorized contexts. For those that haven’t been watching industry news sites’ discussion of screen scraping: scraping is when someone copies large amounts of data from a web site – manually or with a script or program. There are two kinds of scraping, “legitimate scraping” such as search engine robots that index the web and “malicious scraping” where someone engages in systematic theft of intellectual property in the form of data accessible on a web site. Realtor.com spent hundreds of thousands of dollars to thwart malicious scraping and spoke about the screen-scraping challenge our industry faces at a variety of industry conferences that year, starting with Clareity’s own MLS Executive Workshop. The takeaways from the Realtor.com presentations were as follows:

1.    The scrapers are moving from Realtor.com toward easier targets … to YOUR markets.
2.    The basic protections that used to work are no longer sufficient to protect against today’s sophisticated scrapers.
3.    It’s time to take some preventative steps at the local level – and at the national/regional portal and franchise levels.

Clareity Consulting had wanted to solve the scraping problem for a long time, but there hadn’t been much evidence that the issue was serious before Realtor.com brought it up – and there hadn’t been any evidence of demand for a solution. Late last year, Clareity Consulting surveyed MLS executives, many of whom had seen the Realtor.com presentation, and 93% showed interest in a solution. Some industry leaders also stepped up with strong opinions advocating taking steps to stop content theft:

“It is not so much about protecting the data itself but protecting the copyright to the data. If you don’t enforce it, the copyright does not exist.”
– Russ Bergeron

“I am opposed to anybody taking, just independently, scraping data or removing data without permission… We have spent millions of dollars and an exorbitant amount of effort to get that data on to our sites.”
– Don Lawby, Century 21 Canada CEO

The problem didn’t seem to be stopping – in 2012 (and still, in 2013) people continue to advertise for freelancers to create NEW real estate screen-scrapers on sites like elance.com and freelancer.com. Also, we know that some scrapers aren’t stupid enough to advertise their illegal activities. So, Clareity began working to figure out the answer.

There were six main criteria on which Clareity evaluated the many solutions on the market. We needed to find a solution that:

1.    is incredibly sophisticated to stop today’s scrapers,
2.    scales both “up” to the biggest sites and “down” to the very smallest sites,
3.    is very inexpensive, especially for the smallest sites – if there is any hope of an MLS “mandate”,
4.    is easy to implement and provision for all websites,
5.    is incredibly reliable and high-performing, and
6.    is part of an industry wide intelligence network.

Most of those criteria, with the exception of the last one, should be self-explanatory. The idea of an “industry wide intelligence network” is that once a scraper is identified by one website, that information needs to be shared so the scraper can’t simply move on to the next website – which would otherwise have to spend its own time detecting and blocking the scraper – and so on.
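
The sharing idea can be illustrated with a toy sketch. The class and method names below are invented for illustration – a real intelligence network would be a distributed service with reputation scoring, not an in-memory set – but the principle is the same: one site’s detection becomes every site’s block.

```python
# Toy illustration of an industry-wide scraper intelligence network:
# once one participating site flags a scraper, all sites can block it.

class ScraperIntelligence:
    def __init__(self):
        self.blocked = set()  # shared blocklist across participating sites

    def report(self, ip):
        """A member site reports a detected scraper's address."""
        self.blocked.add(ip)

    def is_blocked(self, ip):
        """Any member site checks a visitor before serving listings."""
        return ip in self.blocked

network = ScraperIntelligence()
network.report("203.0.113.50")             # site A detects a scraper
print(network.is_blocked("203.0.113.50"))  # site B now blocks it: True
print(network.is_blocked("198.51.100.7"))  # ordinary visitor: False
```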

Clareity evaluated many solutions. We looked at software solutions, which wouldn’t work because they can’t be integrated the same way into all sites – the customization cost and effort would make them untenable. We looked at hardware solutions, which require rack space, installation, and different integration into different firewalls, servers, etc. – and similarly won’t work, at least for most website owners and hosting scenarios. We looked at tools that some already had in place – software solutions that did basic rate limiting and other such detections, as well as some “IDS” systems websites already had in place – but none could reliably detect today’s sophisticated scrapers and adapt to their evolution. The biggest problem we found was COST – we knew that for most website owners even TWO figures per month would be untenable, and all the qualified solutions on the market ranged from three to five figures per month.

Finally, we had a long conversation with Rami Essaid, the CEO of Distil Networks. Distil Networks met many of our criteria. They are a U.S. company with a highly redundant U.S. infrastructure (think 15+ data centers and several different cloud providers), allowing for not only high reliability but also an improvement to website speed. What they provide is a “CDN” (Content Delivery Network), just like most large sites on the Internet use to improve performance – but this one also monitors for scraping. We think of it as a “Content Protection Network,” or “CPN.” Implementation is as easy as re-pointing the IP address of the domain name. They also have a “behind the firewall” server solution for the largest sites – more like what Realtor.com uses. Most importantly, once Clareity Consulting described the challenge and opportunity for our industry, they worked to tailor both a unique solution and pricing for our unique industry challenge. If adopted, using this custom solution Clareity can monitor industry trends and help the industry take action against the worst bad actors.

Some MLSs have already successfully completed a “beta” and seen the benefits of both blocking scraper “bots” from their websites as well as the performance gains, and now more than a dozen other MLSs have already started their free trials and will be considering the best way to have all subscribers enroll their websites as a reasonable step to protecting the content.

If organized real estate actually organizes around this solution, allowing us to collect the data to stop the scrapers and go after the worst offenders, we will be able to get our arms around this problem once and for all.

For more information:
http://realestate.distilnetworks.com

UPDATE: Distil Networks has become a client of Clareity Consulting. Clareity is assisting Distil Networks reach out to the real estate industry with its solution for our industry’s critical problem.

UPDATE 2: Clareity's project for Distil is complete – we hope we have provided them sufficient guidance so they can make an impact on our industry's security issue.
mattsretechblog: matt cohen (Default)
2012-04-18 12:00 am
Entry tags:

VOW and IDX Rules: Security Compliance in the Trenches

As a consultant often called on by MLSs for help with VOW and IDX compliance audits, and as someone who is always pushing for improved information security in the real estate industry, I love that information security is featured prominently in the VOW rules, section 19.5: “A Participant’s VOW must employ reasonable efforts to monitor for, and prevent, misappropriation, ‘scraping’, and other unauthorized use of MLS listing information. A participant’s VOW shall utilize appropriate security protection, such as firewalls, as long as this requirement does not impose security obligations greater than those employed concurrently by the MLS.” The last part of that rule is also reflected in optional IDX rule section 18.3.14. Auditing these rules has allowed me to help many brokers improve their VOW and IDX security and reduce the risk of an information security incident.

I’ve already written about guidelines for anti-scraping and monitoring and, although anti-scraping is a constantly evolving challenge, that article provides at least a baseline for evaluating VOW rule compliance.

But, what else should MLSs be looking for when evaluating VOW and IDX security?

First, as specifically mentioned in the rule, appropriate firewall protection must be established. When I audit a VOW, I look to make sure that there are only a few specific network ports open on the server – 80 and 443 as needed for the web server to function, and ports needed to provide a secure method of server administration, such as port 22 – or 989 and 990. If ports like 21 and 3389 are open and actually used to administer the website, it should be a big compliance red flag because they are common security incident causes – and issues I see the majority of the time when auditing a VOW or IDX site.
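
That port review can be reduced to a simple classification. Here is a minimal sketch; the allowed and red-flag lists mirror the ports named above, and a real audit would use an actual scanner (such as nmap) to discover the open ports rather than this toy function.

```python
# Sketch of the port classification used in a VOW/IDX firewall review.
# Port lists mirror the article: 80/443 for web traffic, 22/989/990 for
# secure administration; 21 (FTP) and 3389 (RDP) are common incident causes.

ALLOWED = {80, 443, 22, 989, 990}
RED_FLAGS = {21, 3389}

def audit_ports(open_ports):
    """Classify a server's open ports for a compliance review."""
    findings = {"ok": [], "red_flag": [], "review": []}
    for port in sorted(open_ports):
        if port in RED_FLAGS:
            findings["red_flag"].append(port)
        elif port in ALLOWED:
            findings["ok"].append(port)
        else:
            findings["review"].append(port)  # anything unexpected gets a look
    return findings

print(audit_ports({80, 443, 21, 8080}))
# {'ok': [80, 443], 'red_flag': [21], 'review': [8080]}
```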

Second, you want to verify that all the web server software is up to date and properly configured. That means checking the web server (IIS, Apache, etc.) version, the operating system version (when possible) and the platform (.NET, JSP, ColdFusion, WordPress, etc.) version, making sure that those are the most current versions or that newer versions don’t have fixes for significant security vulnerabilities. You might think that keeping systems patched would be second nature for a technology provider, but in my experience, it seems not to be the case.
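
Part of that version check can be automated by examining what the server reports about itself. The sketch below parses a `Server` response header and compares it against a minimum acceptable version; the threshold shown is a placeholder – in a real audit, minimum versions come from current vendor security advisories, and many hardened servers deliberately suppress this header.

```python
# Hedged sketch: flag an outdated web server from its reported Server
# header. Version thresholds are illustrative placeholders only.

import re

def parse_server_header(header):
    """Extract a (product, version-tuple) pair from e.g. 'Apache/2.2.3 (Unix)'."""
    m = re.match(r"([A-Za-z-]+)/([\d.]+)", header)
    if not m:
        return None, None
    version = tuple(int(part) for part in m.group(2).split("."))
    return m.group(1), version

def is_outdated(header, minimums):
    """True if below the minimum, False if current, None if undeterminable."""
    product, version = parse_server_header(header)
    if product is None or product not in minimums:
        return None  # can't tell from the header; flag for manual review
    return version < minimums[product]

minimums = {"Apache": (2, 4, 0)}  # illustrative threshold, not an advisory
print(is_outdated("Apache/2.2.3 (Unix)", minimums))  # True - needs patching
```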

Third, you want to evaluate any externally obvious security misconfigurations of the server and platform. Every server and platform has its own security configuration guidelines and it’s reasonable to expect that obviously poor configurations should not be visible to an external evaluator.

Fourth, and probably the most complicated part of evaluating VOW security, you want to evaluate application security – at least the OWASP Top 10 Vulnerabilities: Injection, Cross-Site Scripting (XSS), Broken Authentication and Session Management, Insecure Direct Object References, Cross-Site Request Forgery, Security Misconfiguration, Insecure Cryptographic Storage, Failure to Restrict URL Access, Insufficient Transport Layer Protection, and Unvalidated Redirects and Forwards. I usually evaluate Information Leakage and Improper Error Handling as well. Some of these items can’t be easily validated externally (i.e., Insecure Cryptographic Storage), though I’m always glad to hear that a web developer has encrypted the passwords – even though that means the site technically cannot be compliant with VOW rule 19.3b (“The Participant must at all times maintain a record of the name, email address, user name, and current password of each registrant.”). I’ve seen every one of these OWASP vulnerabilities while auditing VOWs, and many times there are half a dozen issues on a single VOW.
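
One externally checkable slice of that list – related to Insufficient Transport Layer Protection and Security Misconfiguration – is whether a site sends basic security response headers. The sketch below is illustrative only: the set of expected headers is an assumption for the example, and a full application assessment goes far beyond header checks.

```python
# Sketch: flag security-related response headers missing from a site.
# The EXPECTED set is an illustrative baseline, not a complete standard.

EXPECTED = {
    "Strict-Transport-Security",  # force HTTPS on subsequent visits
    "X-Content-Type-Options",     # prevent MIME-type sniffing
    "X-Frame-Options",            # resist clickjacking via framing
}

def missing_security_headers(response_headers):
    """Return the expected security headers absent from a response."""
    present = {h.title() for h in response_headers}
    return sorted(h for h in EXPECTED if h.title() not in present)

headers = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(missing_security_headers(headers))
# ['Strict-Transport-Security', 'X-Content-Type-Options']
```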

If you’re a staff person at an MLS and a lot of the preceding read like gobbledy-gook to you or you don’t know how to audit security, you may want someone like me auditing VOWs and IDX sites for you, or at least auditing the security and anti-scraping related portions. It has been a blessing for the industry that the VOW and IDX rules give MLSs the opportunity to ensure that at least some reasonable security best practices are in place for VOW sites. I’ve had brokers tell me they were actually grateful someone was keeping an eye on their technology provider in this area, since they lacked the capacity to do so themselves and just figured that all appropriate measures had been taken.

Please keep in mind that website security is the smallest portion of overall brokerage security. Taking appropriate steps in terms of policies and contracts, physical security, account management and password controls, internal networking and computing, mobile device security, and internal web applications are all important. The NAR sponsored security workshops and security articles and blogs that I write, and which many MLSs and Associations reprint, are helping me reach some brokers and agents – but it’s a very difficult task to try to improve information security in this industry and I hope that I can count on my readers to act as security allies and spread the word.
mattsretechblog: matt cohen (Default)
2010-10-11 12:00 am
Entry tags:

Security Burnout

Take a look at the Google chart below – it’s interesting to see how, while information security, authentication and privacy issues continue to grow and news articles call ever more attention to privacy, the number of searches being performed on the subjects – an indicator of interest – decreases.

Google trends - interest in security vs privacy

It’s so easy to close our eyes to the problems around us.

Neither Clareity Security nor Clareity Consulting has wavered in its commitment to improving information security practices in our industry. The focus of the discussion regarding strong authentication may have shifted to an area that people have not burnt out on – ‘revenue assurance’ – but Clareity remains serious about the information security problems faced by the real estate industry.
mattsretechblog: matt cohen (Default)
2010-06-14 12:00 am
Entry tags:

The Four IT Guys

How Does YOUR IT Guy Respond to a Security Audit?

Just last week I was interacting with four organizations regarding significant information security vulnerabilities I had identified. Each had a dramatically different reaction to the news that they had issues to deal with. It reminded me of the Jewish Passover Seder (ritual feast), during which the participants discuss, in elevated language, “Four Children” – the wise, the wicked, the simple, and the one who does not know to ask. So, here are the Four IT Guys:

The Wise IT Guy

The Wise IT Guy is thankful for having a skilled resource helping assess his systems and is instantly attentive when he finds out he has a vulnerability to address. He is not defensive – he knows that even the biggest companies with the most resources have security vulnerabilities and, at some point, his organization will too.

The Wise IT Guy asks, “What are the vulnerability vectors and what are the best ways to defend against the problem?” And you shall answer the Wise IT Guy thoroughly, carefully describing all of the best practices and methods of deployment.

And the Wise IT Guy will use commercially reasonable best efforts to address the vulnerability, seeking it out throughout his organization and creating a plan, balancing the need for speedy remediation with the requirements of proper testing and deployment practices, including post-deployment testing.

The Wicked IT Guy

The Wicked IT Guy is against having anyone look at his systems in the first place, putting the audit off as long as he can, and placing his own fear and/or arrogance ahead of the good of the organization. He doesn’t take the vulnerability seriously and thinks he knows better than security specialists – he hasn’t been hacked yet, right? He knows he can baffle his CEO with BS and maybe talk his way out of dealing with the vulnerability.

The Wicked IT Guy responds to the news of the vulnerability: “What is this vulnerability to you? How likely are we to be affected by it? Will taking the steps you recommend make us perfectly secure?” In asking these questions, the Wicked IT Guy is isolating himself from well-known security best practices. Rather than taking the simple steps to address a vulnerability, he would rather spend that time rationalizing his own inaction. There’s no such thing as “perfectly secure,” but at home the Wicked IT Guy still locks his door on the way out – he just won’t take the obvious and simple steps to protect his organization.

Therefore, he must be rebuked with the explanation that, “These recommendations are made so you can comply with your organization’s information security policy.” By referring to the company’s policy, one is none-too-subtly letting him know that information security is about more than “IT”, and if he flouts it, he might find himself dealing, in his last meeting as an employee, with another two-lettered department, “HR.”

The Simple IT Guy

The Simple IT Guy asks, “What is this vulnerability?” and you shall carefully explain the steps to remediate the issue, offering to have calls with the vendors and contractors he will no doubt need to get his systems out of trouble. The Simple IT Guy usually means well, but will then sit, perhaps transfixed by the blinking lights on his computer, and do nothing. The Simple IT Guy will worry about the issue once in a while but not take action. Sometimes the Simple IT Guy has some smarts but does not understand how to balance information security with his other responsibilities. Either way, the CEO will usually need to be re-engaged regularly to make sure progress is being made to address the vulnerability. Unless the CEO is forceful about getting it fixed and engages help for his Simple IT Guy, the problem will likely persist.

The IT Guy Who Does Not Know to Ask

So far we have seen the Wise IT Guy who is both intelligent and pious about his security responsibilities, the Wicked IT Guy who is intelligent, but, let us say, impious or unwise about organizational security, and the Simple IT Guy – who means well, but lacks the capacity to manage all the technology with which he is entrusted. The IT Guy Who Does Not Know to Ask for help with information security is both unintelligent and foolish. He doesn’t understand the need for assessment and has neither the interest in nor the capacity to respond to security vulnerabilities when they are identified. This IT Guy is all about letting the issue drop – he hopes your first email on the subject will be your last, and that his luck in not getting hacked – or in no one finding out about it – holds up. As with the Simple IT Guy, the issues will require thorough explanation of the vulnerability, but you may need to get his CEO to impress his duties on him in order to overcome his foolish nature – otherwise security vulnerabilities will surely persist.

So, as for The IT Guy Who Does Not Know to Ask, you shall say unto his CEO, “This is what your IT Guy needs to do for you in order that you may be more secure. And we must review progress regularly to ensure this is done.”

Working Toward Redemption

An organizational approach to information security is not rocket science – most of it isn’t even computer science. It starts with management making the commitment to best practices and engaging in assessment (auditing) and re-assessment. When I conduct an audit, I’m probably going to find issues. That’s normal, and it doesn’t make me lose any respect for the organization being audited or its employees. Where I do lose respect for some people, especially IT people, is when their response to the prospect of an audit or the results of the audit is poor and does not serve their organization well. Within an organization, it is management’s responsibility to keep a watchful eye and, besides paying attention to the non-technical aspects of information security, make sure that the Wise IT Guy is on staff. If not, that liability must be mitigated in some way so that the organization is taking reasonable steps toward good information security practices.

mattsretechblog: matt cohen (Default)
2005-08-08 12:00 am
Entry tags:

The Convenience and Security of Single Sign-On

Improving Security While Gaining Efficiency Through Standards

Introduction and Executive Summary
Real estate professionals are using more systems and applications than ever, and they don’t want to have to log into each one separately. The inconvenience and inefficiency of multiple logins are exacerbated when users have to go back and forth between one system and another. As a result, system providers such as MLSs, larger brokerages, and real estate application vendors have moved to integrate commonly used systems as a convenience for the users. An example of this is when a public records system or transaction management system is integrated into the MLS. While the integration is sometimes done securely, Clareity Consulting has seen various examples of this integration being done insecurely in our industry. This white paper describes the problem in more depth and describes best practice solutions to the issue. The common name for this issue is ‘single sign-on’.

Clareity believes it is important that this paper be read by executives and technical staff of software vendors and their customers. If a vendor must choose between implementing security which does not sell systems on its own and a “sizzle” feature that does help sell systems, such as advanced mapping or an enhanced CMA, they will choose the latter. According to a leading real estate software vendor, “I’m delighted that you’ll be pushing this subject…Although we integrate rather un-securely with several other vendors, it’s very rare that we hear a complaint. No customers are pressing on us to fix this, but we want to see it happen!”

One can see how important it is for customers to understand the basics of security so that they can be more sophisticated consumers and help make security a priority for their software vendors. Software vendors can benefit from information such as that contained in this paper, so that they are ‘on the same page’ and can have more advanced discussions regarding how to work together to accomplish single sign-on securely.

The most important highlights of the paper are as follows:
  • Currently, single sign-on is not always accomplished in a secure manner.
  • This issue and its solutions are entirely separate from the ‘strong authentication’ subject addressed in Clareity’s “Protecting our Data” white paper.
  • There are several technical standards to achieve the security goal
    • The leading standard is SAML, though others merit careful watching.
    • These standards are simple enough that software vendors can implement the leading ones with little difficulty.
  • Selecting one standard would be ideal, but not doing so will not create a major problem or become a hurdle to achieving single sign-on.
  • There are commercial products and open-source code available which implement these standards.
    • The products are generally far more complicated, and in the case of commercial products more expensive, than is needed to accomplish the single sign-on tasks required by the real estate industry. Single sign-on can usually be accomplished by software vendors adding code to their existing products at minimal or no additional cost to the vendor or its customers.
    • A commercial product is not a standard. Rather, good products adhere to standards. There is no need for a company – let alone the industry – to select a single product because any code or product that implements a standard can work with any other code or product that implements the standard. Suggesting otherwise is like saying that if the real estate industry doesn’t use the same fax machine manufacturer there will be a disaster.
Note: Clareity would like to thank Bret Wiener, CTO of Rapattoni, for suggesting single sign-on as an important topic for Clareity’s March 2005 “IT Staff and Developer Workshop”, inspiring Clareity to research the topic thoroughly and write this paper.

Defining the Problem In More Depth

While there are various means of achieving single sign-on between computer systems, this paper will focus on standards pertaining to Internet-facing applications, especially standards pertaining to “federated identity”.

The OASIS standards group (http://www.oasis-open.org) defines federated identity as follows:

Federated identity allows a set of service providers to agree on a way to refer to a single user, even if that user is known to the providers in different guises. Most commonly, federated identity is achieved through the linking together of the user’s several accounts with the providers. This allows the user to get more personalized service without centrally storing personal information. Also, it gives the user fine control over when and how their accounts and attributes are linked and shared, allowing for greater control over their personal data. In practice, this means that users can be authenticated by one company or web site and be recognized and delivered personalized content and services in other locations without having to re-authenticate or sign on with a separate username and password. Federated identity infrastructure enables cross-boundary single sign-on, dynamic user provisioning and identity attribute sharing. By providing for identity portability, identity federation affords end-users with increased simplicity and control over the movement of personal identity information while simultaneously enabling companies to extend their security perimeter to trusted partners.

Wikipedia, an online encyclopedia, provides a good example:

A traveler could be a flight passenger as well as a hotel guest. If the airline and the hotel use a federated identity management system, this means that they have a contracted mutual trust in each other’s authentication of the user. The traveler could identify themselves once as a customer for booking the flight and this identity can be carried over to be used for the reservation of a hotel room.

When most people think of federated identity and single sign-on, they think of Microsoft’s Passport initiative and the controversy surrounding it. The difficulties people have had with Passport are that it requires the user to have the same Passport account (username and password) on each system, that it stores enough personal information to be considered too tempting a security target, and that many people simply don’t like the idea of trusting Microsoft with their personal information, even though that information would theoretically only be used to provide a better Internet experience as the user moved from site to site. This controversy directly inspired one of the major single sign-on standards efforts, Liberty, which is discussed later in this paper.

Note that while accurate identification of the user (such as strong authentication via a security token or passkey) is ideally a predicate for single sign-on, it is neither the same thing nor a requirement. Plenty of systems that currently use usernames and passwords (not strong authentication) to identify users have already engaged in the practice of single sign-on. Though some more complex and expensive products market these security mechanisms as a package, the processes of authentication and single sign-on are distinct. That said, Clareity encourages the use of both strong authentication and single sign-on as separate, but related, solutions which will each benefit the industry.

Should Single Sign-On Be Used?

Single sign-on can be simple or complex. At its simplest, one program is merely telling another that it is being sent an authenticated user. At its most complex, a variety of information about the user is stored and passed between applications relating to what resources or functions the user should be allowed. The more complex the system is, the more expensive it becomes and more likely it is for something to go wrong. It also makes it more likely for the system to be abused. In the more complex environments, according to Jeffrey Rozek, senior manager of Ernst & Young’s Security & Technology Solutions division, “Implementation is usually too costly. There are too many mixed environments to tie together. Proper infrastructure components don’t always exist. The technology [1] is still maturing, and it’s difficult to define the core identity.”

Thankfully, the single sign-on scenario in real estate is fairly simple. Clareity does not see a significant barrier to its use if proper steps are taken, such as implementing standards in a secure manner and taking additional security-related steps such as adding proper audit logging to the user hand-off. The real estate industry should be able to utilize standards to achieve its goals without implementing any of the complex and expensive single sign-on software packages designed for and sold to large enterprises.

Regardless of the mechanisms chosen for single sign-on, the security risk of using single sign-on must be recognized: single sign-on involves having one application trust another or various applications trusting a central application. If one application or the central application is compromised, the systems that trust them can also be compromised. While single sign-on is desired by users for the convenience it offers, vendors must take care that each system involved is properly secured, especially the system being trusted to establish identity.

Single Sign-On: Dangers to Avoid

If one is to provide single sign-on, it is more important than ever to provide it securely. For example, a security flaw in the single sign-on integration between an MLS and a Transaction Management System could lead to private consumer information being exposed, or to competitors accessing each other’s information.

Clareity has seen plenty of integration specifications where a user is passed from one system to another using a link looking something like this:

http://www.example.foo/index.aspx?userid=123&password=mypassword

This example is troublesome in several ways. First, it passes the authentication information using a plain text protocol, so that it may easily be intercepted. Second, the URL parameters are not encrypted, and can be easily scripted or modified, since the security mechanism is exposed to the user for manipulation. Third, this URL can be disseminated and used by others. In the worst case, this URL may be posted in a non-secure location and even indexed by a search engine or posted on hacking sites or newsgroups (e.g. http://johnny.ihackstuff.com/index.php?module=prodreviews).

Using ‘hidden’ form fields to pass this information from site to site does not provide significant additional protection, and neither does trying to check the referring web page.
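A more defensible hand-off carries a signed, short-lived token rather than a username and password. The following Python fragment is an illustrative sketch only: the parameter names, URL, and shared-secret arrangement are hypothetical, and a production implementation would additionally run over SSL and track used nonces so a captured link cannot be replayed within its lifetime.

```python
import hmac
import hashlib
import time
import secrets
from urllib.parse import urlencode, parse_qs

# Hypothetical shared secret, agreed upon out-of-band by the two vendors.
SHARED_SECRET = b"a-long-random-secret-exchanged-offline"
TOKEN_LIFETIME_SECONDS = 60

def make_handoff_url(base_url: str, user_id: str) -> str:
    """Build a hand-off link carrying a signed, short-lived token
    instead of a plain-text username and password."""
    expires = str(int(time.time()) + TOKEN_LIFETIME_SECONDS)
    nonce = secrets.token_hex(8)  # makes each link unique
    message = f"{user_id}|{expires}|{nonce}".encode()
    signature = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    query = urlencode({"uid": user_id, "exp": expires,
                       "nonce": nonce, "sig": signature})
    return f"{base_url}?{query}"

def verify_handoff(query_string: str) -> bool:
    """The receiving system recomputes the signature and checks expiry."""
    params = {k: v[0] for k, v in parse_qs(query_string).items()}
    message = f"{params['uid']}|{params['exp']}|{params['nonce']}".encode()
    expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, params["sig"]):
        return False                          # tampered or forged link
    return int(params["exp"]) >= time.time()  # expired links are rejected
```

Because the signature covers the user ID and expiration time, changing any parameter in the URL invalidates the link, which addresses the scripting, modification, and dissemination problems described above.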

Different Mechanisms for Single or Reduced Sign-On

There are various mechanisms or methods for single sign on – or at least reduced sign-on. Each method has advantages and disadvantages, as described below.

Using “password synchronization”, when the user changes their password on one system, a synchronization server updates the passwords on all other systems, or on a central server. This provides reduced sign-on, not single sign-on, and is only a fit for companies exclusively using password protection.

Using a “login aggregator”, authentication information is cached using cookies or on a central server and sent to sites that request it. While this is easy, it only pertains to web applications and outsources trust to a third party. This loss of control over user information is generally unacceptable. Microsoft’s Passport is an example of this type of technology, and the Liberty Alliance has been an example of the backlash against Microsoft’s attempt to “own” single sign-on.

There are various other methods to achieve single sign-on, including:

•    Using local credential storage or cookies, where information is stored on the local computer and sent to applications as needed
•    Using client-side code that automatically logs the user into multiple systems
•    Writing single sign-on code into server-side applications and having them communicate without client-side interaction

Of these methods, the ones involving the client-side usually involve storing usernames and passwords on the client computer, which would cause security problems in the real estate industry or in any other industry where computers are often shared. All of these methods involve server-side integrations that can be expensive.

Using an “authentication platform” to provide single sign-on can ease the integration effort needed for more complex single sign-on initiatives. If this method is required for some reason, care must be taken because it creates a single point of attack and failure: a denial of service attack can devastate all of the protected resources. An open system model is generally better for linking dissimilar systems and helps eliminate the single point of failure. It can be argued that the management of network identity is most efficiently handled in a single location. This can be true in a large enterprise environment, but the dissimilarity between linked real estate systems, and the resultant wide range of user attribute information that would need to be administered on a central system, makes complex single sign-on systems with authentication built in impractical.

Single Sign-On Using Standards

Though each technical standard for single sign-on is different, each of them takes an authenticated user and passes information from one system that another system can use to determine the user’s entitlement to access it. To deal with more complex needs, the user’s attributes can be used to determine which elements of the recipient system the user should be able to access. Again, single sign-on via identity federation is separate from initial authentication. One can use any authentication mechanism and combine it with any code or product that deals with federation, as long as each system involved can implement the same standard for federation. This is the case in an open systems and open standards approach to single sign-on.

As stated in the OASIS FAQ referenced earlier, a standard “abstracts the security framework away from platform architectures and particular vendor implementations.” To use a non-technical example of how standards make it less important to choose a specific implementation, people don’t all need to buy a fax machine from the same manufacturer because all manufacturers’ fax machines implement the same standards (EIA-465 and EIA-466; CCITT T.4 and CCITT T.30).

High-level descriptions of some of the major single sign-on standards follow:

Security Assertion Markup Language (SAML) is managed by the OASIS standards group, with involvement from many industry partners. According to the OASIS web site, SAML is an “XML-based framework for creating and exchanging security information between online partners”. SAML defines “Assertions” consisting of authentication, attribute and authorization information present in a web application, a “Transport” mechanism (SOAP over HTTP via SSL or TLS), and “Bindings and Profiles” secured via bilateral authentication or digital signature. This information is sent from one web application (the “Asserting Party”) to another (the “Relying Party”). More information about SAML is available at http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=security
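To make the assertion concept concrete, the sketch below (in Python, for illustration) constructs a greatly simplified, SAML-like assertion. The element names only loosely mirror the real OASIS schema, and no XML digital signature or secure transport is shown, so this should be read as a structural illustration rather than a conforming implementation.

```python
import datetime
import xml.etree.ElementTree as ET

def build_assertion(issuer: str, user_id: str, role: str) -> str:
    """Construct a simplified, SAML-like authentication assertion.
    Purely illustrative: real SAML assertions follow the full OASIS
    schema and are protected by an XML digital signature or a
    mutually authenticated secure channel."""
    now = datetime.datetime.utcnow().replace(microsecond=0).isoformat() + "Z"
    assertion = ET.Element("Assertion", Issuer=issuer, IssueInstant=now)
    # The Subject identifies the user the Asserting Party authenticated.
    subject = ET.SubElement(assertion, "Subject")
    ET.SubElement(subject, "NameID").text = user_id
    # Attribute statements carry information the Relying Party can use
    # for authorization decisions (e.g. agent vs. broker access levels).
    statement = ET.SubElement(assertion, "AttributeStatement")
    attribute = ET.SubElement(statement, "Attribute", Name="role")
    ET.SubElement(attribute, "AttributeValue").text = role
    return ET.tostring(assertion, encoding="unicode")
```

The Asserting Party would send an assertion like this to the Relying Party over the SAML transport, and the Relying Party would verify its origin before trusting the identity and attributes it carries.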

The Liberty Alliance Project builds on and extends existing standards (SAML 2.0, SOAP, WS-Security, XML, etc.). According to their web site, Liberty “Enables identity federation and management through features such as identity/account linkage, simplified sign on, and simple session management” and also “enables interoperable identity services such as personal identity profile service, contact book service, geo-location service, presence service and so on.” More information is available on the Liberty Alliance Project web site.

Rarely allowing standards to go unchallenged, Microsoft and IBM introduced WS-Security in mid-2002 and have recently made known their intention to submit further competing web services standards called WS-Federation, WS-SecureConversation and WS-SecurityPolicy. According to an article in Network World[2], these additional standards will be submitted in September 2005. Many companies have learned never to underestimate Microsoft, so this initiative bears watching. For more information on the Web Services Federation Language, see http://www-128.ibm.com/developerworks/library/specification/ws-fed/

Which standard should be used? It is difficult to say, and the boundaries between the standards are blurred. Here are two different opinions from different real estate software vendors:

Vendor One:
“My general understanding is that SAML is now supported by WS Security and, of course, always has been supported by Liberty, and so the real issue is WS Security versus Liberty. Even that may be a false choice with IBM now a member of Liberty and Sun and Microsoft cooperating more after their settlement a year or so ago — it seems most likely that the two standards will or are being merged. As a general matter, our history as a [Non-Microsoft] shop leads us to be more inclined to support Liberty, and the whole genesis of WS Security from Passport makes me nervous. We use a ton of IBM products now, though, and so their support of WS Security makes that just as easy for us to support. I like the more distributed nature of Liberty but one should rarely side against Microsoft in making platform decisions. That could mean that the best approach is simply to support both, but…life would be so much easier to just support one. So, I guess if there is a choice, we would suggest Liberty as the focus.”

Vendor Two:
“We recognize that different vendors use different platforms, have different outlooks & philosophies, and aren’t always motivated to cooperate with the competition above their own interests. WS Security is our pick, but of course, we’re already tightly aligned with Microsoft technologies. Liberty is probably a dead horse, and SAML (though exciting to us) assumes the XML revolution…WS Security has a nice, inherent infrastructure and should be easy to adopt for nearly every vendor I’m familiar with.”

Clareity polled a number of other real estate software vendors on this subject and there were a variety of opinions, but a strong bias towards use of open standards.

Another opinion on the choice found on the Sun Microsystems site indicated, “For SSO basics, SAML would suffice. For sophisticated functions, for example, global sign-outs, attribute sharing between providers, and so forth, you should use Liberty. To include the special capabilities, such as Web services, personal profile, and discovery service, Liberty is your answer.” [3]

Vendors have a great amount of choice in implementing these open standards. The standards can be implemented as a part of existing products or using one of the dozens of commercial products on the market, or even using open-source solutions. Again, it is not necessary for technology vendors to standardize on a single product, because any product that implements a standard can work with any other product that implements that standard.

Clareity believes it is reasonable and practical for most software vendors to support SAML (perhaps with the Liberty extensions) and WS-Security. This combination would enable a vendor to support single sign-on with virtually any other vendor or system.

Conclusion

As users expect or desire single sign-on for convenience, the real estate industry must take steps to ensure that single sign-on is accomplished in a secure and responsible manner in order to protect private consumer information and valuable proprietary content.

As stated earlier, while accurate identification of the user is ideally a predicate for single sign-on, it is neither the same thing nor a requirement. The processes of user authentication and single sign-on are entirely distinct.

Real estate industry software companies and organizations are able to utilize one or more existing open technology standards to achieve single sign-on goals without implementing any of the complex and expensive single sign-on software packages designed for large private enterprises. The diverse and fragmented nature of our industry makes it impractical for everyone to agree on one product or solution, but as long as open standards are followed, different single sign-on solutions (‘home grown’, open-source, and commercial) can each be implemented successfully at a reasonable level of effort and cost to each organization. Attempts to control or “own” single sign-on in the real estate industry should be discouraged, because they are certain to lead to higher costs for end-users and less flexibility and control for the industry’s stakeholders.

[1] http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1076620,00.html?bucket=NEWS
[2] http://www.networkworld.com/news/2005/071405-ws.html
[3] http://developers.sun.com/prodtech/identserver/reference/techart/federated.html#4
mattsretechblog: matt cohen (Default)
2004-08-01 12:00 am
Entry tags:

Protecting Our Data: The Next Step to Improve Security in the Real Estate Industry

Strong Authentication Needed to Protect Against Login Credential Sharing and Theft

Introduction
MLS data is the core asset of the real estate industry, and it is crucial that it be protected. An increasing number of MLSs and real estate companies have asked Clareity to research online system access control and data protection methods in order to come up with security solutions that are more reliable than the current username-password mechanism. Unauthorized system access and theft of data have become a national epidemic due to lax password-only security.

Over the past two years, Clareity has performed significant research on data protection and user authentication (system access control). We first presented “strong authentication” as an important security component for MLSs to consider implementing during our presentation at the Council of MLS meeting in 2002 and at subsequent Clareity Workshops. We believe it is now time for the industry to take action to improve its lenient security practices.

To quote Bob Butters, Partner with the law firm of Arnstein & Lehr, Chicago, IL (formerly Deputy General Counsel at NAR and a lawyer with the FTC):

“Unauthorized use of Participant access codes can lead to MLS liability and loss of MLS’ valuable proprietary rights. Not only can the economic value of the MLS’ database compilation be undermined, but non-public showing instructions falling into the wrong hands can lead to MLS liability for personal injury and property damage.”

The following pages:
•    Explore the security issue facing real estate web site and MLS operators
•    Explain “strong authentication” and how it solves the current issue
•    Present the most viable and practical authentication solution

The Current Security Issue
The simplest and most common form of authentication (the process of verifying the identity of an individual) used in IT today is user login name and password. However, there is a fundamental flaw with this method of authentication: there is no guarantee that the user of the password is the owner of the password.

According to a poll conducted by the Human Firewall Council (now known as the “Information Systems Security Association”):

•    52% of office workers polled would download company information if asked to by a friend
•    42% would tell a friend their password
•    64% already gave their password to a colleague
•    2 out of 3 gave their company password to the pollster!

A separate survey conducted in April 2004 by the organizers of the Infosecurity Europe conference found 71% of office workers were willing to part with their password for a chocolate bar. (Yes, a chocolate bar.) Some 37% of the workers in that survey immediately gave their password to the pollster, and prompting questions raised that number substantially. There is no reason to believe that the real estate industry protects its passwords any better.

As long as there has been an electronic MLS, agents and brokers have shared their MLS passwords. Clareity has found MLS members sharing their login information with technology vendors, friends and family, and even the consumer. With the advent of easy-to-use web-based systems, password sharing has become even more widespread. Whether these passwords are used by “harmless” non-member users, such as part-time Realtors who are not paying for MLS service, or unauthorized data pirates, the simple truth is that MLSs are no longer members-only systems. MLS system passwords – and access – are simply out of control nationwide.

Recent postings to MLS e-mail groups confirm the real estate industry is in need of a solution to stop the illegal access and distribution of MLS data. Andy Duplay, MLS Director of the Toledo Board of Realtors, asked, “How do we … not let others ‘capitalize’ on this and share ID and password among licensed staff in their office, to avoid paying MLS dues?”

To quote Carl DeMusz, President and CEO of Northern Ohio Regional MLS (NORMLS):

“We have … stiff fines for sharing user names and passwords. That does not mean we can now sit back and not police it … You can’t be weak on your enforcement if you want to control the data and keep it from falling into the wrong hands.”

“I think we need to remember that the MLS has been given protective custody of the listing broker’s listings. I see many MLS’s these days taking that for granted … If we as MLS’s can’t prove to be trusted with the listing broker’s listings there can be retaliation and there could be a price to pay for not getting our house in order.”

John Mosey, President of Regional MLS of Minnesota, said:

“Most of us are blissfully, and in most respects, willfully, unaware of what is happening to the data. We jump all over any cases of misuse or piracy that come to our attention, but I believe what we don’t know is where the true threat to our rights of ownership will be found. It is most certainly the duty and responsibility of MLS Executives to be constantly on guard regarding the security of our systems and data.”

Controlling access to the MLS – and to transaction management systems, broker systems, and other real estate information systems – is now more critical than ever, since it is no longer just listing information at stake. MLS systems now include a host of contact management and CRM applications that store personal information about clients and prospects. With transaction management system adoption on the rise, access to the MLS system now provides entrée to a whole new world of sensitive property and personal financial information. While these systems have not yet been subjected to scrutiny under the FTC’s Gramm-Leach-Bliley Act Standards for Safeguarding Customer Information, Clareity believes that higher security standards are only a matter of time, given the expansion of sensitive data stored in MLS and transaction management systems.

Password secrecy is now, more than ever, subject to the frailties of human nature:

•    Forgetting passwords
•    Writing passwords down
•    Sharing passwords
•    Using a common password for all of their accounts
•    Having passwords stolen

User techniques for avoiding password security obviously negate the intent and purpose of the password. Therefore, reliance on this form of login authentication has caused MLS access to become inadequately controlled. It is only a matter of time before there is an embarrassing and damaging public incident involving a breach of privacy on an MLS system.

To deal with the inherent weaknesses in password-based login authentication, Clareity recommends that MLSs implement “strong authentication,” which both protects MLS dues revenue and provides a proper level of protection for listing, consumer and financial information.

Strong Authentication Solves the Password Problem
There are three factors of authentication: 1) something you know, for example, a password; 2) something you have, for example, a smart card or other token; and 3) something unique about you, such as a fingerprint.

That third item, “something about you,” is commonly referred to as “biometrics.” Biometrics eliminates the problems associated with password management by measuring human characteristics such as voiceprint, fingerprint, iris pattern, and facial contours, which are virtually impossible to duplicate and cannot be lost like traditional passwords. While biometrics can be a very powerful tool, Clareity does not recommend it for MLS or other real estate systems because of the high implementation and support costs, especially in our industry’s distributed and mobile work environment. Therefore, Clareity focused its strong authentication research on combining “something you have” with “something you know.”

A common example of this type of strong authentication is your ATM or bank card. Such cards are called ‘tokens’ in IT security parlance. Tokens require something you have (your card), and something you know (your PIN). You wouldn’t want your bank to allow access to your account with just one of these factors – the risks are far too great. Yet in the real estate industry, sensitive data can be accessed with just one factor – a weak, memorized password! Strong authentication eliminates password risks by providing multi-factor authentication. Best of all, strong authentication is not new behavior for humans, as we can all relate to the ATM card example.

“Something you have” can take several forms, but is generally classified as follows:

•    A physical smart card or USB token
•    A ‘digital certificate’ / software token

The best form of these tokens generates a dynamic, one-time pass code that the user enters to access the system. Most of the vulnerabilities of memorized passwords (sniffing, guessing, hacking, sharing, etc.) are eliminated if the system requires a different pass code each time the user logs in, as any pass code obtained by or given to a non-member would be outdated as soon as it was used.

Instead of entering a username and password into their login screens, users type in their user name, push the token button (and possibly enter a PIN code for extra security), and enter the pass code that the token displays. It’s simple, fast and painless. Once a pass code is used, it can’t be used to log in again.
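A simplified sketch of how such a token can derive its pass codes, shown here in Python for illustration: both the token and the server hold a shared secret and a counter, the token advances its counter on each button press, and the server advances its copy on each successful login, so a captured code cannot be reused. This mirrors HMAC-based one-time password schemes in general (later standardized as RFC 4226 HOTP); it is not a description of any particular vendor’s proprietary algorithm.

```python
import hmac
import hashlib
import struct

def one_time_passcode(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time pass code from a shared secret and an event
    counter, in the style of HMAC-based one-time password schemes."""
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian value
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation: pick 4 bytes
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because each counter value yields a different code, a pass code observed in transit, or shared with a non-member, is worthless once the legitimate login has occurred.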

There are hundreds of companies offering various types of strong authentication products today. Clareity spent significant time researching and identifying which company could provide the right solution for the real estate industry.

About Secure Computing
Clareity selected Secure Computing as the leading candidate for providing an MLS strong authentication solution based on several factors:

•    The company’s proven track record and company stability
•    The full range of the security solution offering
•    24 x 7 x 365 staff support
•    History of innovation
•    Ease of integration with other systems
•    Competitive pricing

The company and its strong authentication product are extremely well regarded in the security industry. In 2002, SC Magazine rated it the Best Buy and gave SafeWord PremierAccess five stars out of five in every category (features, ease of use, performance, documentation, support, value for money). According to SC Magazine, the product offers “Great choices, strong security, extremely scaleable and ease of manageability exceeded expectations.”

To quote Frank Gillman, Director of Technology at the large law firm of Allen, Matkins, Leck, Gamble and Mallory LLP, “I wish every technology we deployed was this successful. PremierAccess worked right out of the box.”

Secure Computing is also eager to work with Clareity to tailor its security products and services for MLS and the real estate industry – and that process has already begun.

Track Record and Stability
Secure Computing (NASDAQ: SCUR) has been securing the connections between people and information for over 20 years. Specializing in delivering solutions that secure these connections, Secure Computing is qualified to be a security solutions provider to organizations of all sizes. The company has over 11,000 global customers, including the majority of the Dow Jones Global 50 Titans and the most prominent organizations in banking, financial services, healthcare, telecommunications, manufacturing, public utilities, and government. Secure Computing has close relationships with the largest agencies of the United States government, including multiple contracts for advanced security research.

SafeWord PremierAccess customers include hundreds of leading firms in banking, finance, government, and various high-tech industries, totaling over three million end users. Deployments range from very small companies and government organizations to one of the world’s largest aircraft manufacturers with 100,000 users and one of the world’s largest banks with 800,000 users. There is no question that the SafeWord PremierAccess solution can scale to meet the needs of even the largest MLS.

Secure Computing is consistently rated at or above the rest of its market segment, and revenues for this profitable and growing company have more than doubled in the past four years:

Year    2000    2001    2002    2003
Revenues    $34.64M    $48.35M    $61.96M    $76.21M

Secure Computing is headquartered in San Jose, California. (For more information on the company, see http://www.securecomputing.com.)

Full Range of Solutions
Secure Computing’s SafeWord PremierAccess solution provides many authentication options, including passcode-generating tokens, digital certificates, smart cards, biometrics, and text messages to wireless devices such as cell phones, pagers and Palm Pilots. An MLS can mix and match any or all of these solutions to offer the best, most flexible solution for all of its members.

Clareity believes that a combination of any of the following three forms of strong authentication will make the most sense for the real estate industry:

Silver 2000
In a convenient keyfob package, Silver 2000 authenticator tokens generate one-time passwords with the simple touch of a button. These passwords, like those generated by the Platinum or Gold 3000 tokens, can be used only once, ensuring secure access. No PIN is required to activate this authenticator, although MLS customers may wish to take advantage of the SoftPIN feature, which allows the addition of a PIN entry to the token-generated password, providing two-factor authentication.

MobilePass
MobilePass transmits one-time passwords as text messages directly to most wireless phones, pagers, or PDAs. This zero-footprint solution provides the security of a token without requiring any additional hardware or client software: MobilePass works with the devices users already have.

SofToken II
Secure Computing’s software-based token, SofToken II, is for users who want the security of one-time passwords but do not want to carry a hardware token. SofToken II generates a dynamic password just as the handheld tokens do, but the SofToken II software resides on the hard drive of the user’s laptop or desktop system.

24 Hour Support
Secure Computing provides 24x7x365 staff support. Clareity will help your organization through the process of integration, and provide staff training and on-site assistance during implementation. Optionally, Clareity can also provide end-user training during the deployment.

Clareity will also function as a front-line account representative for your staff and as an on-call troubleshooting partner with Secure Computing.

Comparison with the Competition
In May 2004, Secure Computing’s strong authentication solution was awarded the Editor’s Choice and the only “A” rating by Network Computing’s Secure Enterprise Magazine in a head-to-head comparison with other market-leading authentication products. Those products included ActivCard’s ActivCard Token, RSA Security’s RSA SecurID, and Vasco’s Vacman Middleware and Controller.

Further comparing the Secure Computing solution with its competition:

•    The US Department of Justice performed durability tests, including washing machine, freezer, and oven tests, on both Secure Computing tokens and those of a major competitor. Secure Computing tokens had a 95% survival rate, while the competitor’s tokens had a 95% failure rate.
•    Secure Computing has the simplest integrations, as documented in the magazine articles quoted above.
•    Secure Computing has the only automated deployment and self-enrollment mechanisms.
•    Secure Computing does not charge for failover servers, while competitors (at least those that support such important functionality) charge extra.
•    Secure Computing does not charge extra for 24x7x365 support.

Secure Computing’s competitors simply do not offer the variety and quality of products, and while some of them sell cheaper authentication tokens, their extra costs add up.

Secure Computing’s solutions offer world-class authentication security and Clareity estimates the total cost of implementation, including 24 x 7 technical support, to be in the $1.50 to $3.50 per member per month range, depending on the security solution and financing options selected by the MLS.

According to Eric Hemmendinger, Research Director, Security and Privacy of the Aberdeen Group, “Fitting together authentication and access control solutions from different suppliers is an integration nightmare that is usually handed off to high-priced consultants. With the introduction of SafeWord PremierAccess, Secure Computing is delivering pre-integrated solutions for access control and user authentication that are plug-compatible with existing applications. The result for IT buyers is less time and money spent on integration, and a better fit between the old and the new security solutions.”

When Security Pipeline tested a number of competitors in the strong authentication space, including Secure Computing’s SafeWord PremierAccess, ActivCard’s ActivCard Token, RSA Security’s RSA SecurID, and Vasco’s Vacman Middleware and Controller, the PremierAccess product came out on top again:

“So who wins? Secure Computing’s SafeWord PremierAccess offers one of the easiest ways to build AD-linked hardware authentication into your security plan. It’s our overall winner because its schema modifications are performed according to Microsoft standards and its plug-in approach to integration makes tying it to AD easy.”

Clareity is confident that Secure Computing is the best company to provide strong authentication to the real estate industry.

Clareity and Secure Computing
After considering the aforementioned test results and rigorously comparing available solutions, Clareity has become Secure Computing’s vertical market partner for the real estate industry. Clareity will be working to help Secure Computing tailor its world-class security technologies to provide powerful and practical products for real estate. The customized solutions will provide security across a broad spectrum of real estate IT systems and will be specifically designed for mobile professionals. They will provide MLSs and real estate companies with heightened security while being easy to use and inexpensive.

Clareity will help your organization through the process of tailoring and implementing the right security solution for your MLS. This includes planning the installation with your vendor or staff, system integration, staff training, and providing on-site assistance to guarantee the success of your deployment. Clareity supports Secure Computing products and functions long-term as the company’s front-line account representative for staff and as its troubleshooting partner. Clareity’s ongoing relationship with Secure Computing will help ensure your organization receives the highest level of service over the life of your contract. Clareity can also coordinate cooperation and compatibility in markets where there are multiple or neighboring MLSs implementing Secure Computing’s products.

Ease of Product Integration

Secure Computing has already developed integration methods that avoid the application-integration headaches that have plagued other authentication solutions. The following integration methods will make it easy and economical to integrate the Secure Computing solution with applications used by the MLS and others in the real estate industry:

Universal Web Agent and Web Login Server
Many organizations providing extranet access or web content run a range of different web servers, such as IIS, iPlanet, Apache, or custom application servers. Most access control solutions rely on agents or plug-ins installed on the Web server software, and these are limited to specific Web server products or versions. PremierAccess solves these problems with the Universal Web Agent (UWA): while most agents protect a specific program, the UWA protects the entire operating system the Web server runs on.

The Universal Web Agent protects both Windows 2000 and Solaris 7&8 servers. PremierAccess also includes a Web Login Server (WLS) that works in conjunction with the UWA. The WLS is an independent Web server that you can install on the same box, or on a separate box from the UWA. The WLS intercepts Web traffic, requests authentication, and verifies user identity with the PremierAccess AAA (authentication, authorization, administration) server. Once a user is authenticated, a session cookie is created with a session ID number and role-based authorization information. These credentials are passed to the UWA, which enforces what pages or Web resources the user can access.
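The WLS/UWA split described above can be sketched as a signed session credential: the login server issues a tamper-evident cookie after the AAA server approves the user, and the agent only has to verify the signature and the embedded role. The code below is a minimal illustration of that pattern, with hypothetical key, user, and role names; it is not the actual PremierAccess cookie format:

```python
import hashlib
import hmac
import secrets

SIGNING_KEY = b"server-side-secret"  # hypothetical; held only by the servers

def issue_session_cookie(username: str, role: str) -> str:
    """Login-server side: mint a tamper-evident credential after the
    AAA server has authenticated the user."""
    session_id = secrets.token_hex(16)
    payload = f"{session_id}|{username}|{role}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def enforce(cookie: str, required_role: str) -> bool:
    """Agent side: verify the signature, then apply the role-based rule."""
    payload, _, sig = cookie.rpartition("|")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                 # forged or corrupted credential
    _session_id, _username, role = payload.split("|")
    return role == required_role

cookie = issue_session_cookie("agent007", "broker")
assert enforce(cookie, "broker")
assert not enforce(cookie, "admin")
```

The design point is that the enforcing agent never needs to re-contact the authentication server on every request; it only needs the shared signing key to trust the credential.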

SafeWord PremierAccess support for VPNs
SafeWord PremierAccess adds critical strong authentication to positively identify a user before an encrypted VPN tunnel is established, an essential component of any secure VPN solution. Secure Computing offers robust and scalable solutions that work with all major VPN vendors including Cisco (Concentrator 5000 and Altiga 3000), Check Point (VPN-1/FireWall-1), Microsoft Windows 2000, Nortel (Contivity), and Alcatel (7130 Secure VPN).

SafeWord Agent for Windows Domain
The SafeWord Agent for Windows Domains is a Windows domain login module that lets companies provide secure access to Windows NT and Windows 2000 domain-based networks.

Additional Integrations
Secure Computing provides integration modules for Citrix®, TACACS+, NT remote access server (RAS), Novell Modular Authentication Service (NMAS), Pluggable Authentication Modules (PAM) and SID2.

Software Development Kit (SDK)
Going even further, Secure Computing published the PremierAccess SDK (software development kit), designed to enable developers to add SafeWord authentication to their own applications. The SDK contains code, documentation, strategies for building configurations, and sample programs that demonstrate how to use the API. It provides several advantages to developers: it includes a facility for authenticating passwords using SafeWord technology, it allows programmers to easily add SafeWord capability to their applications, and it is well suited to both new development and retrofits. The SDK integrates into applications developed for Microsoft NT, Solaris/SPARC, Linux, and other operating environments.
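The actual SDK calls are not documented in this proposal, so the retrofit pattern it enables can only be sketched with hypothetical names. The idea is that an application stops comparing passwords against its own local store and instead delegates the check to the authentication server; `safeword_verify` below is a stand-in stub for whatever verification call the real SDK exposes:

```python
# Illustrative retrofit pattern only; `safeword_verify` is a hypothetical
# stand-in for the SDK's server-side verification call.
def safeword_verify(username: str, onetime_password: str) -> bool:
    """Stub for the SDK call that asks the AAA server to validate a
    one-time password (here it simply accepts a fixed demo value)."""
    return onetime_password == "755224"

def login(username: str, password: str) -> bool:
    # Before the retrofit, this function compared against a locally stored
    # static password; after, it forwards the credential to the server,
    # so the application never holds password material itself.
    return safeword_verify(username, password)

assert login("demo", "755224")
assert not login("demo", "guessed-password")
```

Because only the login function changes, this kind of retrofit leaves the rest of the application untouched, which is why the proposal describes the SDK as suited to retrofits as well as new development.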