mattsretechblog: matt cohen (Default)
2019-03-03 09:16 am
Entry tags:

Media Manipulation: MLS Action Required

Many people may not realize that numerous listing images uploaded to the MLS, used on broker and agent websites and apps, and syndicated for display on other Internet sites, are heavily modified. While this can be a very effective sales tool, some undisclosed manipulations may be deceptive. National MLS policies and other steps should be implemented to address the risk this causes.

At the 2019 Clareity™ MLS Executive Workshop, we invited Peter Schravemade from BoxBrownie.com to present “Ethical Marketing & Photography,” to help frame the beginnings of the national conversation that is needed to create such policies. Such policies would have implications for listing maintenance and compliance, data standards, IDX and VOW rules, agreements with photographers, as well as data license agreements for listing content sent to third parties.

It is not the purpose of this blog to propose restrictions on media manipulation that might be counter-productive to the selling process. Rather, the focus is to consider how MLSs might help subscribers improve their current marketing practices while reducing their risk, as well as reducing consumer dissatisfaction that may result from undisclosed, possibly deceptive, manipulations.

Media Manipulation Can Be Good - Even Necessary

Peter provided an example of an overexposed listing photo, where important details such as the view out of the window could not be made out. The view may be desirable, like an ocean view, or undesirable, like a sewage treatment plant. He described the common process professional photographers use: taking multiple photos at different exposures to capture all the important details, then compositing them into a single image that maximizes the detail presented. This technique, called “bracketing,” is technically media manipulation, but it is entirely ethical, done in the service of producing an image closest to what one would see with one’s own eyes in the room. In fact, failing to use bracketing to ensure our hypothetical sewage treatment plant is visible would itself misrepresent the listing. Other basic edits, such as straightening a photo accidentally taken at an angle, are also clearly not unethical, should be of no concern to us, and should not require disclosure.
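
For the technically curious, exposure fusion of a bracketed series can be sketched in a few lines. This is only an illustration of the technique Peter described; it assumes the OpenCV library is installed and that three bracketed shots (file names are hypothetical) exist on disk.

```python
# A minimal sketch of exposure-fusion "bracketing," assuming OpenCV
# (opencv-python) is installed and three bracketed JPEGs exist on disk.
import cv2

# Load the same scene shot at three exposures (file names are hypothetical).
exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

# Align the frames first; handheld brackets rarely line up perfectly.
cv2.createAlignMTB().process(exposures, exposures)

# Mertens fusion blends the best-exposed detail from each frame:
# the window view from the dark frame, the room from the bright one.
fusion = cv2.createMergeMertens().process(exposures)  # float32 in [0, 1]
cv2.imwrite("fused.jpg", (fusion * 255).clip(0, 255).astype("uint8"))
```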

Types of Media Manipulation

There are several practices that are common and useful in the sales process but may be deceptive, depending on the implementation. Such deception may be intentional or unintentional. For example:

  • Image enhancement – The intent of this edit is to return the image to what a buyer would see when visiting the property but that the original photograph failed to capture. For example, a grey sky may be changed to a blue sky to reflect a sunny day. In some areas of the US, and most of the world, it is acceptable to add green grass. Issue: Overuse of this type of manipulation - for example, adding grass to an area where it would not or could not normally grow well - can misrepresent the property.

  • Twilight conversion – The intent of this edit is to show the property at a time of day that evokes a positive emotional response in a buyer. It is already a commonplace manipulation when the photographer has been unable to shoot the property at sunset as desired. Issue: Photographers may change the color of the sky and add a sunset, but sometimes they put the sun in the wrong location. When this happens, the viewer may think certain rooms get southern or other exposure when they do not, or that they can enjoy sunsets from the pool when the house would actually block them. This misrepresents the property.

  • Item removal – The intent of this edit is to remove clutter that will be gone before the sale or that is not part of the sale process. Issue: An overzealous editor can easily misrepresent the condition of whatever they imagine lies behind and beneath the clutter. And editing out undesirable things such as power lines, poor views, and property condition issues is plainly deceptive and unethical.

  • Virtual staging (Item addition) – The intent of this edit is to demonstrate to a purchaser what a space could be by adding photorealistic furniture. When executed well, this is an effective and harmless edit. Issue: If not performed extremely carefully, it is easy to misrepresent the size of a room by adding virtual items that are not in proportion to the room’s actual measurements. Images implying a light fixture is present where none is installed would be deceptive. Images of items that would normally convey with the property but are not actually present would be deceptive. And if the added items obscure a condition issue, that would also be deceptive.

  • Virtual renovation – The intent of this edit is to demonstrate to a purchaser the potential of a property (by, for example, adding a pool) or to remove an objection (by adding a kitchen, or renovating an abandoned property). One common form removes everything from a room and leaves it looking as though it is already prepared for painting and other finishing. Issue: If not disclosed well, it may mislead viewers into believing the image shows the actual condition of the room. After all, not only might getting the room to that state be expensive, but actual room preparation might reveal other conditions that increase the cost of renovation.

  • Renders / CGI / Hybrid CGI – The intent of this edit is to demonstrate what a property might look like before it has even been constructed. Issue: The reality of what is constructed is rarely identical to an artist’s rendering. If viewers do not understand that they are looking at an artist’s creation and not present reality, it could be deceptive. This should be disclosed.


Ensuring that media manipulation is disclosed is important for several reasons. Obviously, we do not want to mislead brokers, agents, appraisers, or consumers. No one wants to waste time visiting a property that is not in the condition indicated by its photos and other media. The accuracy of professional property valuations that depend on manipulated images of the property, or of comparable properties, could suffer. There may be lawsuits from people who purchase a property without validating the veracity of each listing image. Finally, as we consider a future where computers use artificial intelligence to derive data about a property from its media, we would not want to accidentally rely on a manipulated image and create incorrect data.

Actions Suggested for the MLS Industry

The MLS industry has a strong interest in the accuracy of listing information, including media. The property should be represented accurately by its media, and neither professionals nor purchasers should be deceived. Ideally, we should implement a national MLS policy regarding media manipulation that is easy to understand and uses correct terminology, so that it is understandable both by real estate professionals and by media creators.

The following actions should be considered:

Create and implement a national policy regarding media manipulation that requires disclosure and is easy to understand.

Educate MLS subscribers on photography “common sense”: explain where a technique may be deceptive, and explain subscribers’ responsibility to vet the manipulations performed against the property being sold to ensure the images are not deceptive. MLS subscribers should also be taught how to spot media manipulation providers that create deceptive images, intentionally or otherwise, and how to report issues to the MLS. It may also be worthwhile to share best practices for contracting with such providers, including obligating providers to document the changes made to each image, so that the risk of accidental or intentional deception is not borne entirely by the listing agent and others who use the media. Where a manipulation might be deceptive as described above, MLS subscribers need to understand their responsibility to disclose it.

Create a standard disclosure as a part of the media manipulation policy. Peter Schravemade provided the following language as a starting point for discussion:

This image is an artist’s impression of what the property ‘might’ look like. As such the image has been digitally modified. [ABC REALTY] suggests you conduct your own due diligence into the state of the property or request a statement of what has been modified from the brokerage.

Rules regarding the display of such disclosures - in the media themselves as a watermark, or displayed prominently in proximity to the media inside the MLS, on IDX/VOWs, and wherever the content is syndicated - should be a part of the policy.
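
As one hedged illustration of the watermark option, the sketch below stamps a disclosure line onto an image using the Pillow library; the file names and disclosure wording are placeholders, not prescribed policy language.

```python
# A rough sketch of watermarking a listing photo with a disclosure,
# assuming Pillow (PIL) is installed; names and text are illustrative.
from PIL import Image, ImageDraw

DISCLOSURE = "Digitally modified image - see listing for details."

img = Image.open("listing_photo.jpg").convert("RGB")
draw = ImageDraw.Draw(img)

# Draw a simple banner along the bottom edge, then the disclosure text.
banner_height = 28
draw.rectangle([(0, img.height - banner_height), (img.width, img.height)],
               fill=(0, 0, 0))
draw.text((10, img.height - banner_height + 7), DISCLOSURE,
          fill=(255, 255, 255))

img.save("listing_photo_disclosed.jpg")
```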

Establish RESO data standards for storing and transmitting information about media manipulation. Each type of media manipulation listed above could become an enumeration value for the field. Peter suggested an additional enumeration: “A digitally activated fireplace or appliance”.
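
As a purely hypothetical sketch (these field and value names are illustrative, not an adopted RESO standard), such an enumeration and its use on a media record might look like:

```python
# A hypothetical sketch of how a media record might carry manipulation
# metadata; field and value names are illustrative only, not an actual
# RESO standard.
from enum import Enum

class MediaManipulationType(str, Enum):
    IMAGE_ENHANCEMENT = "ImageEnhancement"
    TWILIGHT_CONVERSION = "TwilightConversion"
    ITEM_REMOVAL = "ItemRemoval"
    VIRTUAL_STAGING = "VirtualStaging"
    VIRTUAL_RENOVATION = "VirtualRenovation"
    RENDER_CGI = "RenderCGI"
    DIGITAL_FIREPLACE_OR_APPLIANCE = "DigitallyActivatedFireplaceOrAppliance"

media_record = {
    "MediaKey": "12345",
    "MediaURL": "https://example.com/photo1.jpg",
    "MediaManipulationType": [MediaManipulationType.VIRTUAL_STAGING.value],
    "MediaManipulationDisclosure": "Virtually staged; furniture shown is not included.",
}
```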

Once there are policies and data standards related to media manipulation, make changes to the MLS listing maintenance software so this data can be managed.

Consider if and how MLS rules and data license agreements may be amended to protect parties that use the media from risk due to deceptive media manipulation that was missed by the listing agent.

Last Words

Media manipulation has become less expensive and is increasingly commonplace. Each of the types of manipulation described above can be a perfectly legitimate and valuable sales tool - when executed correctly and disclosed. Creating MLS policies and taking the related actions described above should help us maintain professionalism and ethics and reduce risk for those using manipulated images.


mattsretechblog: matt cohen (Default)
2019-02-11 07:37 am
Entry tags:

MLSs Must Continually Articulate Value

To improve and maintain subscriber satisfaction, multiple listing services must continually articulate the value that they provide, both as an organization and through products and services.

In more than half of the MLS strategic planning processes I facilitated over the last year, pre-planning surveys and other research showed most subscribers were unaware of many of the products and services the MLS provides in exchange for their access fees, and lower awareness correlated directly with lower satisfaction with the value provided for those fees. For some clients, where I performed more detailed research, it was apparent that even when subscribers were aware of a product’s existence, they were not aware of specific features or recent enhancements and their benefits. Raising awareness of the value the MLS provides, as an organization and through products and services, is difficult, but it can and must be done. Failing to put resources into such communications, and into maximizing their effectiveness, would be penny-wise and pound-foolish.

Value of the MLS

Too many subscribers still think of the MLS as the way to access a database. When it comes to promoting the value of the MLS organization, the “Making the Market Work” campaign released by the Council of MLS back in June of 2017 is still the best resource for MLSs to adopt and promote through all channels. I’m still surprised how many MLSs have not adopted this campaign on an ongoing basis. If you’re an MLS executive, at your next board meeting, try giving your board members a short quiz about the value of MLS – if even they can’t articulate the tenets of confidence, connections, and community to at least some degree, more work is certainly needed.

Value of MLS Products and Services

There’s a saying in the industry: “Realtors Don’t Read,” or “RDR” for short. I dislike this saying because I have worked with so many professionals over the years, and they DO read … if the message is interesting. All too often the headline, Facebook post, or tweet reads something like “[MLS] Releases [Product Name].” If I were a busy professional, I wouldn’t click through on that either.
Professionals are primarily motivated by four types of benefit-oriented messages:

  1. This will help you make more money (“profit”)

  2. This will save you time (“ease”)

  3. This provides insight into your business (“control”)

  4. This will reduce risk or prevent you from falling behind (“fear” or “fear of missing out (FOMO)”)

So:
  • “Read about the new changes to listing input” will not generate nearly as many click-throughs as “New listing input feature saves you 20 minutes per listing”.
     
  • “Learn more about [product]” is not as effective as “This [MLS] agent closed 18% more transactions this year by using [product].”
     
  • “Sign up for [product] classes” will not generate as many click-throughs as “[Product], offered as an [MLS] benefit, helps you close transactions 15% faster. Click for a 5 minute video with everything you need to know.”
     
If, when professionals click through from these calls to action, they land on a short web page – possibly with a short video – that continues to sell the benefits of the product and explains how to get started, you will start to see more product awareness and adoption. And if they’ve already tried the product but aren’t seeing the benefits, this type of messaging may well get them to look closer at how they are using it. Either way, they are being reminded of the value provided by the MLS.

Targeting and Improving Opting

Another important MLS communications trend has been to improve targeting and opting.

Some MLSs are still not targeting properly: they send every message by email to every subscriber, even messages that apply to only some subscribers, or they present messages that apply to only some subscribers in an MLS post-login popup shown to all. If an MLS doesn’t target properly, it’s easy for subscribers to burn out on the quantity of irrelevant messages they receive. Sorting subscribers into groups based on role, product and service use, and other factors is key to targeting communications properly. If messages are not targeted, subscribers may decide to opt out – an MLS communications disaster!
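
As a toy illustration of that kind of segmentation (the roles and flags below are hypothetical, not drawn from any particular marketing system):

```python
# A toy sketch of audience targeting: send a message only to the
# subscriber segment it applies to. Field names are hypothetical.
subscribers = [
    {"email": "ann@example.com", "role": "agent", "uses_showing_tool": True},
    {"email": "bob@example.com", "role": "appraiser", "uses_showing_tool": False},
]

def audience(subs, role=None, product_flag=None):
    out = []
    for s in subs:
        if role and s["role"] != role:
            continue
        if product_flag and not s.get(product_flag):
            continue
        out.append(s["email"])
    return out

# A showing-tool update goes only to agents who actually use that tool.
print(audience(subscribers, role="agent", product_flag="uses_showing_tool"))
```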

Another problem I still see at some MLSs is a single opt-out – all or nothing. These days, when you try to opt out on most websites, they encourage you to opt out of certain types of communications rather than all of them, and MLSs should follow suit. Just the other day, I received a brokerage email about the value of my former house, and when I clicked to opt out, it opted me out of communications about that property only; I would have had to navigate the site further to opt out of more than that. Some MLSs say they can’t have a more sophisticated opting system because their marketing system doesn’t provide for it. If that’s the case, it’s time to find a new marketing system.
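
A minimal sketch of category-level opting, with illustrative categories, might look like:

```python
# A sketch of category-level opting instead of a single all-or-nothing
# flag; the categories shown are illustrative.
preferences = {
    "ann@example.com": {"product_news": True, "training": False, "market_stats": True},
}

def can_send(email: str, category: str) -> bool:
    # Default to False so an unknown address is never emailed.
    return preferences.get(email, {}).get(category, False)

print(can_send("ann@example.com", "training"))      # False - opted out
print(can_send("ann@example.com", "market_stats"))  # True
```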

Continuous Improvement

This article has covered just a few components of how MLSs can improve and maintain subscriber satisfaction by continually articulating the value that they provide, both as an organization and through products and services. Some readers will remember that I literally spent hours at the 2014 Clareity MLS Executive Workshop providing a more comprehensive review of communications best practices – it’s a huge topic! Still, with just a bit of planning, by taking more care to use benefit-oriented language with subscribers to articulate the value of what the MLS provides, and by improving targeting and opting methods, MLSs can dramatically improve subscriber engagement and satisfaction over time.


mattsretechblog: matt cohen (Default)
2018-12-07 08:08 am
Entry tags:

MLS Products & Services: From Principle to Practice

There are many technological "shiny objects" MLSs are presented with for site-licensing or offering a la carte. How should MLS leadership evaluate them? Here are some thoughts from a presentation I previously made at the Clareity MLS Executive Workshop. First, I'll examine NAR's approach and suggest some potential updates; then I'll describe some additional principles that should be considered.

The NAR Approach


NAR has created three categories, basically:
  • CORE  = MLS MUST provide
  • BASIC = MLS MAY (include)
  • OPTIONAL = MLS MAY (offer, a la carte)

CORE

This category includes information, services, and products essential to the effective functioning of the MLS, as defined, including current listing information and information communicating compensation to potential cooperating brokers.

But how is MLS defined by NAR?
  • a facility for the orderly correlation and dissemination of listing information so participants may better serve their clients and customers and the public,
  • a means by which authorized participants make blanket unilateral offers of compensation to other participants (acting as subagents, buyer agents, or in other agency or non-agency capacities defined by law),
  • a means of enhancing cooperation among participants,
  • a means by which information is accumulated and disseminated to enable authorized participants to prepare appraisals, analyses, and other valuations of real property for bona fide clients and customers, and
  • a means by which participants engaging in real estate appraisal contribute to common databases.

The phrase "listing information" seems too limited, given all the kinds of information resources MLS subscribers expect these days, the types of information being standardized at RESO, and all the types of data needed for a core MLS system or database to interoperate with all of the various tech tools in use by MLS subscribers. I would suggest that perhaps the definition of MLS could use a little updating by eliminating the word "listing" rather than trying to create some kind of all-inclusive list.

BASIC

This is determined locally and provided automatically or on a discretionary basis, and includes items such as: sold and comparable information, pending sales information, expired listings and “off market” information, tax records, zoning records/information, title/abstract information, mortgage information, amortization schedules, mapping capabilities, statistical information, public accommodation information, MLS computer training/orientation, and access to affinity programs.

Some brokers and broker groups have declared many things out of scope for MLS:  agent websites, CRM, property marketing tools, showing systems, transaction management systems, and MLS public-facing websites. One large group complained loudly a few years ago about MLSs pushing NAR to add as many items as possible to the list of ‘basic’ MLS functions to force participants to pay for them, whether they want them or intend to use them or not. Since that time it was determined that in-person training could not be mandated - clearly things are in flux.

But it seems obvious that showing systems could be considered critical infrastructure for efficient cooperation. The case for inclusion could be made for other items on the brokers' list as well. How do we know who’s right, and what belongs on that CORE and BASIC list and what doesn't? I do NOT think we should be evaluating the distinction between these lists to serve the interests of the "lowest common denominator" of MLSs OR go wild adding items to the list willy-nilly. I DO think we need to apply some additional principles and I'll come back to that.

OPTIONAL

An MLS may not require a participant to use, participate in, or pay for the following optional information, services, or products: lock box equipment including lock boxes (manual or electronic), combination lock boxes, mechanical keys, and electronic programmers or keycards; advertising or access to advertising (whether print or electronic), including classified advertising, homes-type publications, electronic compilations, including Internet home pages or web sites, etc.

Do Not Pass Go...

Those of you who attended the 2012 Clareity MLS Executive Workshop probably remember the cautionary note about anti-trust tying violations - how easy it is to get into trouble if one creates an illegal tie between one product (e.g. membership / subscription) and another when expanding a product offering. I won't dive into the details here, but I encourage readers to remember the four elements of such a tie:
  1. There must be two separate products;
  2. There must be a tie;
  3. The seller must have enough power in the market for the tying product so that it can impact trade in the market for the tied product; and
  4. A certain amount of sales for the tied product must actually be impacted by the tie.

I'm insistent that we must continue to re-evaluate the definition of MLS precisely because defining the product set that reflects the function of MLS (versus a separate product) is such a core part of the testing.

Another Approach: Principle


What are the principles by which MLS information, services, and products belong in the categories of core, basic, and optional? I not only believe we must more clearly define MLS but also clearly define the principles that are considered when evaluating the categorization of products.

Each bullet point in the definition of MLS services effectively spells out a core principle, which is a test for how appropriate it is for a product or service to be included in the basic MLS package. In short, at least given the current MLS definition:

A. Manage/Disseminate info so participants better serve clients, customers, and public.
B. Means for compensation offer
C. Enhance participant cooperation
D. Participants: appraisals, analysis, valuation for clients
E. Participant appraisals

But let's look at some other elements for consideration:

Principle 1. Network Power. Does the product or service require many or all MLS subscribers to use it to achieve benefits from it? Professional collaboration tools (e.g., transaction management and showing systems) would fall under this principle, unless they interoperate sufficiently that collaboration can occur without everyone using the same system.

Principle 2. Economic feasibility. Does the product or service help participants better serve their clients but is it economically or otherwise infeasible for any one participant to field the product or service on their own? 

Principle 3. Integration. Does the product or service require a level of integration into core systems that would not be feasible from an economic and/or interface perspective if every broker or agent selected their own? Note that ability to integrate continues to evolve.

Principle 4. Economic Interest. Is there an overarching subscriber economic interest?

Note that principles 1-4 help to refine consideration and categorization of items already considered relative to A-E (or an updated MLS definition that drives a different A-E). And, of course, all has to be considered against the potential for creating an illegal tie.

During my presentation at the Clareity MLS Executive Workshop we considered a number of product examples and evaluated them against 1-4 and A-E. That's the approach I'm suggesting MLS leadership take as they are approached with "shiny objects".

Additional MLS Considerations

An MLS is unlikely to go through a process of product evaluation unless the product appeals to subscribers, that is, it fills a subscriber need. But choices must also be made based on whether the product is strategic for the MLS and its subscribers in some manner, how important and urgent it might be for the MLS to field the product at that time and, of course, cost. Also, MLSs typically have limited capacity to roll out new products and continually encourage adoption of those products - again, choices must be made.

Deciding what to do when a product or service is not as well adopted as desired, or if there is dissatisfaction with it, is a topic for another blog, another day.

Next Steps for the Industry

I would suggest that industry leaders collaborate on the following:
  • Consider how we might modernize the definition of MLS (perhaps beyond just fixing reference to “listings”). Think about what we aspirationally want the MLS industry to become - again, a subject for another post.
  • Refine core (and basic) MLS services as a standard to reflect that new definition. Phase in over time to allow MLSs to determine a strategy for coming up to snuff (on their own or together).
  • Add four additional principles to the section of MLS policy relating to service characterization (core, basic, optional)
  • Remove lists from policy and put them in a “best practices” document explaining how each product/service (and new ones) relate to definition and principles.
  • Run it all by anti-trust attorneys!
mattsretechblog: matt cohen (Default)
2018-11-26 07:54 am
Entry tags:

Strategic MLS Issues for 2019

Some time ago, in preparation for an industry publication, another industry consultant asked me, "What are the most significant issues facing the MLS by 2020, what actions do you propose be taken to address these issues and what are the desired outcomes?" Following are my answers:

Expand what cooperation via MLS means. If the perception is that MLS is just a place to advertise homes, with some private fields, MLS is very vulnerable. If we make it more explicit that cooperation is about much more than that, the MLS can grow stronger.

To accomplish that, we can:
  • Develop standards for compliance and enforcement: data standards and business rules, data distribution, maintaining "fair" online advertising using the compilation (IDX / VOW) and other uses.
  • Develop core standards for MLS data integrity business rules.
  • Consider the kind of government regulation the industry could be facing with regard to wire fraud and get ahead of it - MLS can be a part of that, if organized. Information security practices will be a part of that - many of the steps needed to deal with wire fraud take place during the "cooperation" phase where the MLS is, or could be, involved.
  • Develop and implement standards for brokers and agents ("With teeth") re: responsiveness to showing requests, participation in secure electronic communications and transaction management. Develop these as MLS monitored areas with supported core functions as needed. As elements of cooperation, these would seem to fall within the jurisdiction of MLS. What other areas, technological and otherwise, could be considered "cooperation" and be an MLS function?

There are many challenges ahead for MLS - I think it would be ideal to firm up its value and capabilities in this area.

Consolidation: A unified industry would be more capable of managing risk.

To encourage consolidation, we could:
  • Develop and mandate core standards for MLS, based on CMLS best practices.
  • Drive recognition of strategic issues driving consolidation BEYOND the local service needs and Overlapping Market Disorder (OMD) - for example, managing risks described in the DANGER report, information security, and legal challenges (without depending on subsidy from the national level). Positioning based on OMD alone was unfortunate because that is only one of the drivers for consolidation.
  • Determine the MLS consolidation end-game (per my earlier blog on the subject). When I speak on the subject of consolidation, or am helping MLSs achieve it, I create a map based on consumer-oriented data that allows us to productively discuss the end-game. This market-driven end-game map should drive the tactics used to achieve consolidation, basing them on consumer needs and the professional access needed to serve the consumer rather than on existing industry structures. Add the other broker and core standards factors and we should have a better picture of the end-game we are aiming for.
  • NAR itself could get the right people from each organization to sit down together and work consolidation out. Peer-to-peer asks have had some effectiveness, but progress is slow. Not everyone comes to MLS conferences like the Council of MLS or the Clareity MLS Executive Workshop to understand why consolidation is so important.
  • Some of the states have, in the past, worked against consolidation - this must be discussed at the national level. Shared services at the state level solve some problems, but are stopping others from being solved.

Front End of Choice

Most MLSs are not ready to unbundle for FEoC: providing a core "pipe" of information and allowing for product choice, including products provided through the MLS organization, through brokerages, and purchased by agents themselves. Though I don't think this is an especially important trend to move forward - the others described above are much more urgent - I don't think this trend is going to go away. If the industry is going to support it, the potentially incompatible licensing processes and per-user cost bundles MLSs have created will need to be addressed. A few large MLSs (MRIS, CRMLS, Northstar MLS) developed the technical infrastructure to support this - and it is not easy! Others have focused on providing FEoC for alternate means of MLS data access while providing a single core MLS system. Once the further data standards needed to support this type of business are created, there may be increasing competition for this core infrastructure.

Brokers are dealing with a similar issue today, with some deploying suites but many more deploying best-of-breed products, integrated as best they can without data standards. Many brokers are happier when they create this best-of-breed solution, but it's expensive and difficult because of that lack of data standards. There's more inertia in the MLS space to stick with the single-vendor suite plus a few integrated products, due to licensing models, but data standards in the MLS space are more advanced than in the broker space and alternate front ends are starting to emerge - so I expect this is going to happen. Again, "How urgent and important is this?" is a question that needs to be asked when considering this initiative - and the answer is not going to be the same for every MLS.

Other Issues

Every MLS has different issues to address in strategic planning. For example, in the last few plans I have facilitated in 2018, "Front End of Choice" didn't come up at all from subscribers or the leadership, while the other two issues did in one form or another - along with other local issues. Some of those local issues are ones I've seen recently in several organizations and could well be covered in another blog post.

mattsretechblog: matt cohen (Default)
2018-10-12 12:00 am
Entry tags:

Security Auditing compared with Penetration Testing

Recently, while we were discussing a contract, an industry executive asked me to explain the difference between a “security audit” and a “penetration test.” The party with whom the executive was negotiating the contract had changed the contractual requirement from the one to the other. Since this might be of interest to others, I’ll provide the explanation here on the blog.

The short explanation is that a “penetration test” is just one small component of a “security audit,” and that the software / service provider was aiming for a much lower bar than we were on behalf of the executive and her MLS. Following is an explanation of just how much lower.

A penetration test is an attempt to find weaknesses in the defenses in a computer system’s security. Sometimes the term is used in reference to non-computer security, but typically not. The test usually consists of a combination of automated and manual testing to find specific attacks that will enable the attacker to bypass certain defenses, then working to find additional vulnerabilities that can be found only once those initial defenses are defeated. As a result of this testing, risk can theoretically be assessed and a remediation plan put in place. One can also evaluate how well an organization detects and responds to an attack.

One significant problem with penetration testing is that, if the system being tested has good outer defenses, a penetration test may not find security risks lurking inside of the defenses. It may then present as its finding that risk is low and no action is needed. Then when there is a change to the outer defense that causes a vulnerability (or a vulnerability is discovered in that outer defense) actual attackers will breach all of the inner layers where security has not been well designed. It’s a well-known principle of security design that one designs multiple levels of security (a/k/a “defense in depth”). Penetration testing, on its own, doesn’t reliably measure whether this has been done properly.

For the more technical readers, I will provide an example:

Let’s say a web application has been written with excellent protections against SQL database injection attacks. The penetration test is run against the application, no SQL injection issues are found, and the system passes with flying colors. But let’s say the web application had full database server privileges (a “db_owner” role on MS SQL, or a DBA or user with too many global privileges on MySQL), and that the database platform service had full system privileges (an “Administrator” role user on Windows or “root” on Linux). Or let’s say that someone didn’t have MySQL’s “OUTFILE” disabled, along with other issues allowing remote file access. Then, one day, a programmer makes a single mistake (that never happens, right?) and all of the poor configuration behind the web application is exposed, and a hacker can easily grab database contents and take over the database server or even the whole network. I’ve taken an attack exactly that far – from the login prompt on the outside of an application – during a sanctioned security audit, of course!
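
For illustration, here is a minimal sketch of the injection-resistant query style the example assumes, using Python's built-in sqlite3 module; the table and column names are hypothetical, and the same parameter-binding principle applies on MS SQL or MySQL.

```python
# A minimal sketch of parameterized queries, plus the least-privilege
# idea: the web app's DB account should be able to read listings, not
# own the server. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect("mls.db")

def find_listings(city: str):
    # Parameterized query: user input is bound as data, never spliced
    # into the SQL string, so a quote in `city` cannot alter the query.
    return conn.execute(
        "SELECT mls_number, address FROM listings WHERE city = ?",
        (city,),
    ).fetchall()

# The vulnerable style an audit would flag (string concatenation):
#   conn.execute("SELECT ... WHERE city = '" + city + "'")
```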

The other problem with using a penetration test as the sole way to measure security is that information security is a much broader area of exploration, normally measured during a full security audit. Most of the breaches we’ve had in this industry have been the result of weak policy and procedure or the contracts that reflect those policies, inadequate human resources practices, and physical security issues. Still other technical security issues have resulted from a lack of protection against screen scraping, and yet others from authentication related issues – both items that most penetration testing tools cannot easily uncover. Looking at the security configuration of network equipment, servers and workstation operating systems, platforms, and installed software, mobile devices, antivirus, printer and copier configuration (yes, I really said copiers!), password selection, backup practices and so much more – all of that falls outside the purview of typical penetration testing and is only reliably addressed via a full security audit.

There’s nothing wrong with using a penetration test as a part of a security audit – just don’t mistake the part for the whole.
mattsretechblog: matt cohen (Default)
2018-03-14 12:00 am
Entry tags:

Report from Clareity’s 17th Annual MLS Executive Workshop

A Short Preamble

Clareity’s annual MLS Executive Workshop took place in Scottsdale, Arizona and, as it has every year, the event was sold out. Responding to our post-event survey, MLS executives made comments like:
  • “Great event this year. Always the best networking.”
  • “Informative and insightful. Can’t wait for next year’s workshop.”
  • “As always – groundbreaking and “explode your brain” wonderful!”
  • “For me, this meeting truly does start the conversation for the year.”
  • “Love the size. Good content, all substance.”
  • “This was the best MLS Workshop ever.”



Here are some takeaways from the Workshop:

Welcome / State of Industry

Gregg Larson shared observations from 2017 and an outlook for 2018 and beyond. This included a broad roundup of consumer technologies, coverage of the MLS and tech vendor merger-and-acquisition trend, and a focus on new brokerage models and what they could mean for traditional brokers and the MLS. While many in the industry describe these brokers from a position of fear, Gregg’s focus was more on how these companies seek to meet consumer needs and how we all can adjust to industry change based on those needs. Later in the Workshop, Clareity gave both Redfin and OpenDoor an opportunity to explain how they work well within the MLS community – more on that later in this report. Gregg also thanked the sponsors that let us run a quality show at a low cost for attendees.

Major Brokers Demand Better MLS Security

HomeServices of America is advocating for vendors – including MLSs – to adopt more stringent security measures. HomeServices’ CIO, Alon Chaver, talked about how they are beginning to work with several MLSs on this and how they intend to expand that effort to other MLSs. Clareity’s been beating the security drum for over twenty years now, so we welcome HomeServices to the effort.

Alon’s takeaways for MLSs:
  • Minimize outbound data distribution – provide only the necessary data, data sets, and data fields required to perform the contracted services.
  • Require contractual assurances at contract renewal – a commitment from vendors to secure the data they receive, including not sharing data with third parties unless authorized and vetted for security, minimizing programmatic access via APIs, and requiring breach notifications, insurance coverage, and indemnity provisions.
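
As a toy illustration of “provide only the necessary fields” (the field names below are placeholders, not a specific feed’s schema):

```python
# A toy sketch of outbound data minimization: strip each record down to
# the fields a vendor's contract actually allows. Field names are
# illustrative, not a specific feed schema.
CONTRACTED_FIELDS = {"ListingKey", "ListPrice", "City", "StandardStatus"}

def minimize(record: dict) -> dict:
    return {k: v for k, v in record.items() if k in CONTRACTED_FIELDS}

full_record = {
    "ListingKey": "abc123",
    "ListPrice": 450000,
    "City": "Scottsdale",
    "StandardStatus": "Active",
    "OwnerName": "should never leave the MLS",  # not contracted
}
print(minimize(full_record))  # OwnerName is dropped before distribution
```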

If your organization hasn’t yet begun an organizational security program or just wants a fresh set of eyes on your security practices, please contact Clareity’s Matt Cohen to discuss moving forward.

Managing Compliance for the new Mandatory Waiver Policy

Carolina MLS has had a waiver policy in place for a while now. Steve Byrd explained the various conditions where his MLS allows waivers, the scope of waiver use, and their 3-part process for catching “cheaters”:
  1. Using Listing Data Checker software to look for co-listing violations. Since subscribers are not allowed to co-list with non-subscribers, they use the tool to search for keywords such as “co-listed” and related terms, as well as for listings with an email address, web URL, or phone number in the Public, Agent, and Company Remarks or Directions fields that might be a sign of co-listing (a rough pattern-matching sketch follows this list).
  2. Clareity’s SAFEMLS + RISK product works as a constant deterrent. Nonetheless, using the tool, Carolina MLS issued 19 notices/warnings to subscribers for password sharing and unauthorized use of the MLS, and issued four significant fines for password sharing.
  3. Reports by agents. When agents ask what to do because the selling agent is not an MLS subscriber, or because they can’t find that agent in the roster, the MLS investigates. Carolina MLS has fined and back-billed six times since 2013.
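
As promised above, here is a rough sketch of the kind of remarks scan item 1 describes; the patterns are illustrative, not the actual Listing Data Checker rules, and would need tuning before production use.

```python
# A rough sketch of scanning remarks fields for signs of co-listing;
# patterns are illustrative, not the actual Listing Data Checker rules.
import re

PATTERNS = {
    "co_listing_term": re.compile(r"\bco-?list(?:ed|ing)?\b", re.I),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "url": re.compile(r"(?:https?://|www\.)\S+", re.I),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_remarks(text: str) -> list[str]:
    """Return the name of every suspicious pattern found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(flag_remarks("Co-listed with Jane Smith, call 555-123-4567"))
# ['co_listing_term', 'phone']
```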

Leadership Lessons

In a session facilitated by Denee Evans, two CMLX3 graduates, Colette Stevenson and Stan Martin, shared leadership lessons learned from their CMLX3 experience. There’s no way to sum up such a complex conversation easily, but one of the most interesting parts of the session was when Colette and Stan talked about learning about their strengths and weaknesses as part of the process and how they improved their management capabilities as a result.

Blockchain: What does it mean for the MLS?

NAR / CRT’s David Conroy explored blockchains, which provide a verifiable and trustworthy record of events or transactions, and showed how this new technology could evolve to:

  • Improve property records
  • Greatly reduce the cost of business for all parties
  • Reduce risk in real estate transactions
  • Help buyers & sellers get to the closing table faster

Standards Evolution Supporting New Innovations

CRMLS’s Art Carter presented information about the growth of RESO, case studies from myTheo, Homes.com, and other demonstrations of RESO successes, as well as highlight videos from RESO’s DataComp event, showing how standards evolution is supporting true innovation in the real estate technology space. In the myTheo example, that company reduced product time-to-market from 6-7 weeks down to 3-4 weeks and reduced the staffing resources needed to launch in a new MLS market by 30-40% – all by using a certified RESO feed.

MLS Copyright and the “Spark” of Creativity

Matt Cohen moderated a panel including Mitch Skinner, Claude Szyfer, and Brad Bjelke to discuss the copyright office re-evaluation of whether MLSs can copyright the compilation based on “creativity”. A status update on the copyright office discussions and NAR’s role in them was provided. In an especially fun part of the session, Brad role-played making arguments on behalf of the copyright office while Mitch and Claude argued against him in an adversarial fashion. We discussed what MLSs could do to increase the creativity of the compilation. We also discussed whether use of data standards (common field names and enumerations) could reduce creativity of MLS compilations and cause issues – the answer to which is “yes, at least some” – but that can’t get in the way of data standards adoption and there are lots of other ways these compilations are creative. We need to better demonstrate just how creative they are to the government.

Future of IDX and VOW

Matt Cohen, homes.com’s Andy Woolley, and Fantis Group Real Estate & Clientopoly’s Tony Fantis talked about the myriad issues with the current IDX and VOW policies, and presented some visions of how policy could be changed to allow brokers and their vendors to make more innovative uses of IDX/VOW data. One vision was very large in scope but evolutionary, while the other was more revolutionary. The reasons for each approach and the pros and cons of each were discussed. We hope those in the audience on relevant NAR committees – and those who influence those committees – will pick up the ball and run with it.

Should MLSs be Supporting Successful Agents?

Xplode’s Matt Fagioli presented a vision of how agents will be successful going forward with technology and what MLSs could be doing to support them. Many tools were discussed, but some MLSs said there was one big takeaway for them: figuring out how to help their agents take advantage of Instagram, since according to a 2017 Forrester report, Instagram has a 2.2 percent per-follower interaction rate vs Facebook at only 0.22 percent.

MLS Consolidation

T3 Sixty’s Kevin McQueen gave the audience some interesting statistics to think about: we’re down to 677 MLS organizations, with 88 regional MLSs serving 80% of REALTORS® and the remaining 600 or so MLSs serving the other 20%. Kevin described how a useful tactic to initiate discussion is to take inventory – looking at duplicate listings, subscribers, and listing agents, and quantifying the waste of the inefficiency by putting a dollar figure on it that makes sense to stakeholders. He suggested that the important thing to do to get the ball rolling on consolidation is to get groups in the room together – sometimes with state association leadership, with 2-3 larger MLSs (not just one “gorilla”), and involving brokers. Kevin suggested we may want to focus on the most severely overlapping markets, especially the nine states containing over 350 MLSs.

The Future of Brokerage and What It Means for MLS

Rob Hahn & Sunny Lake Hahn ended day one with an entertaining look at brokerage. They described a barely profitable traditional brokerage model, the increasing pressures brought by agent teams that “own the kitchen table” where the relationship with consumers is formed, additional pressure from 100% commission shops, and still more pressure from technology brokerages. They described a potential future in which brokerages are run like professional services firms, with equity partners and employee associates, and in which there may be fewer brokerages and agents. Perhaps MLSs will need to establish a different financial model to address shrinking membership, and MLSs may need to continue to evolve policy in relation to teams. In summary, Rob and Sunny exhorted MLSs to stop fighting, understand the pain brokerages are experiencing, and help “save them.”

A Tragedy of the Commons

Redfin’s Chelsea Goyer presented Redfin’s pro-MLS point of view, countering a “think tank” article that invoked Redfin’s name and painted MLSs negatively. She talked about an article she and Glenn Kelman had written about this called “A Tragedy of the Commons”.  According to Wikipedia, “The tragedy of the commons is a term used in social science to describe a situation in a shared-resource system where individual users acting independently according to their own self-interest behave contrary to the common good of all users by depleting or spoiling that resource through their collective action.” In the view published by Chelsea and Glenn, it’s important for brokers to support the shared-resource system that is MLS. Chelsea also talked about the need for MLSs to consider how membership should be (and feel like) a privilege, how MLSs could be more transparent, how it can be modernized, how important data standards are, and how MLS consolidation should make things better for brokerages. This was a great session!

Zillow’s Listing Ecosystem

From listing input to listing distribution, Zillow’s Errol Samuelson described how they work within existing MLS infrastructure. The Bridge Interactive API provides RESO Platinum certification, including the data dictionary plus additional fields and extended datasets. He demonstrated a well-designed management interface and reporting capabilities. Errol also explained how the solution could be used not only to manage data distribution from a single MLS, but also to “bridge” multiple MLSs into a single feed for brokers and their vendors. He also showed a mobile-friendly listing input system that complies with MLS business rules. This solution is live in Atlanta, and coming soon to Rhode Island, Huntsville, Boston, and Oakland / Berkeley.

Personas as a Way of Better Understanding Subscribers

MLSListings’s Dave Wetzel presented a different approach to understanding subscribers better using personas. A persona is a fictional character who embodies certain essential characteristics, such as attitudes, goals, and behaviors, of a particular subset of the users of a product or service. Personas are constructed using sample data collected from actual users. Constructing personas creates internal focus and understanding of your users across your teams and organization. They help internal teams empathize with users, including their behaviors, goals, and expectations. They help the company talk in terms of user needs, and they support better decision making where users are concerned. Dave described the five personas they identified in their MLS and suggested other MLSs do similar research to create their own personas to guide their efforts.

Managing the DANGERs and Other Risks

While managing risk is an important business function, many MLSs feel the risks we face are too big to manage. Matt Cohen helped explore what MLSs can do about the DANGERs (from the NAR D.A.N.G.E.R. Report) and other risks, as discussed in “MLS 2020 Agenda”. Some risks are too much for smaller MLSs to manage – which is one reason MLS consolidation is important. Some risks may even be too large for larger regional MLSs and may take cooperation to address. The session covered five risks, re-evaluating them for probability, timing, and impact, and providing an initial set of risk mitigations for discussion. For example, mitigations for the information security risk include:
  • Security Assessment & Remediation
  • Strong Authentication
  • Anti-Scraping (MLS resources)
  • Anti-Scraping (IDX requirement)
  • API Security
This session should inspire good conversations during MLS strategic planning!

Homesnap (Broker Public Portal)

Gregg interviewed Guy Wolcott, the Founder of Homesnap. He asked probing questions about measuring success of the effort, and about how the company plans to achieve greater success in the future.

How to Capture, Communicate and Close in today’s “On Demand” World

Realtor.com’s Bob Evans described Realsuite, their new product which includes “Respond”, which quickly delivers responses to client inquiries, “Connect”, which provides a contact management system and includes market data reports, and “Transact”, which organizes documents and tasks and includes form integration and electronic signatures.

Power of the Network and Site Licenses

Matt Cohen moderated a panel including Lone Wolf / Instanet Solutions’ Joe Kazzoun, Showing Time’s Michael Lane, Real Safe Agent’s Lee Goldstein and CSS’s Kevin Hughes. Panelists described the conditions under which products are optimally site licensed, versus “a la carte” licensing or provided as one of several choices. Each described the benefits of site licensing for their product, and the panel discussed the hybrid model of licensing core features but upselling additional capabilities to individual users. Finally, we discussed data standards and how companies may choose to share data – or not share it – with business partners, competitors, and the consumers. While standards make it possible to move data more easily, business, legal and privacy issues all affect whether data will be shared.

OpenDoor.com

Gregg Larson interviewed Kerry Melcher, GM of OpenDoor.com, the original and leading iBuyer in the country. Their brokerage works like this: sellers request an offer, and the brokerage makes an offer to buy the property itself, rather than trying to find a buyer for it immediately. If the seller is interested, the brokerage then conducts a home assessment and, if repairs are needed, the seller can either make the repairs or deduct their cost from the offer and the brokerage will make them. Payment then happens in just a few days. OpenDoor then maintains the property and finds a buyer. As discussed during the session, it’s important to note that the company buys at retail and sells at retail – this is not about buying low and flipping homes. OpenDoor works with buyers too: buyers can use the app to gain access to the homes it has for sale, and every home comes with a 30-day satisfaction guarantee.

While a lot of people are afraid of the change that iBuyers might bring to the industry, Kerry made it clear that they work cooperatively with other agents all the time and conform to MLS rules. Also, by making the transaction easier for consumers, they believe their approach will result in more transactions and more money for the real estate industry overall.
Final Words

It’s “business as usual” at Clareity. If you want more information on our professional services (strategic planning, MLS regionalization, public speaking, security audits, etc.), please contact Matt Cohen or Gregg Larson. If you want information about Clareity’s security and SSO products, please contact Troy Rech.

Clareity packed a lot of perspectives and content into a ton of sessions over a day and a half – but the Workshop is about more than content – it’s about relationship building. We’ve listened to those attendees with ideas of how to make the event even better – besides the meals together and fun outings on arrival day, the longer breaks have improved the networking possible during the event. Over the past 17 years, MLS executives and their guests have enjoyed our event which, we’ve heard in post-event surveys, is “just the right size” and “full of takeaways.”  We promise to continue to improve the Workshop based on attendee feedback.

Thank you for your support!
mattsretechblog: matt cohen (Default)
2017-07-20 12:00 am
Entry tags:

Real Estate Association APIs and Data Standards

Association management systems have traditionally been silos where most information about the member cannot easily be used to provide a custom per-member experience and solve other business problems. Also, many associations want to use “best of breed” solutions to provide service to members, and the lack of integration with association management systems has made that difficult or limited.

This paper is a starting point for raising wider industry awareness of existing AMS APIs and what can be accomplished by using them, especially if associations show demand and a developer community can be organized.

This paper will also serve as a starting point for discussing possible business objectives for the APIs – important for assessing whether the APIs under development properly support those objectives.

Finally, this paper should inspire both business and technical experts to engage with the AMS providers at RESO to create a data dictionary of field names that should be common to all AMS and create a common mechanism for accessing that member information.

You can download the paper here: Real Estate Association APIs and Data Standards
mattsretechblog: matt cohen (Default)
2017-07-05 12:00 am
Entry tags:

Preventing Screen Scraping: Policy, Contracts and Technology Evaluation

When organizations create policy requiring prevention and monitoring of screen scraping and other automated attacks, it’s important for those organizations to be specific enough that compliance with the policy can be measured in some way. It is equally important for the organization to ensure that its technology contracts contain clear and explicit terms implementing those policies.

If policies and contracts do not contain specific anti-scraping technology requirements, one can easily end up in an argument over whether the steps taken to prevent scraping are sufficient, even if those steps are demonstrably ineffective. For example, a website provider might implement a “CAPTCHA” on login and say, “This should be enough to prove the humanity of the user. It’s not a computer program using the website.” But not only are many CAPTCHA tests easy for computers to defeat (it’s an arms race!), but if all a data pirate needs to do is have a human being log in and/or complete a CAPTCHA test once per day, and have a computer capture the cookie (containing session information) for use in scraping data, it’s not a very high barrier. Likewise, an anti-scraping solution might block an IP address as a scraper’s if the website gets more than 20 or 30 information requests per minute from that address – and while that seems like a reasonable step, these days the more advanced scrapers spin up a hundred servers on different IP addresses, have each of them grab the data from just a few pages, then move those servers to different IP addresses.

Anti-scraping is therefore difficult, and while the mechanisms mentioned above might play a part in a solution, one must deploy a more comprehensive solution to have a reasonable chance of actually stopping the screen scrapers. Moreover, that more comprehensive solution should be detailed explicitly both in the terms of any contracts executed by the defending organization and in any policies it implements regarding reasonable security measures.

Anti-scraping requirements might look something like the following:

The display (website, app’s API) must implement technology that prevents, detects, and proactively mitigates scraping. This means implementing an effective combination of the countermeasures defined in the “OWASP Automated Threat Handbook” “Figure 7: Automated Threat Countermeasure Classes” (reproduced below and available at https://www.owasp.org/index.php/OWASP_Automated_Threats_to_Web_Applications). Those countermeasures must be demonstrably effective against commercial scraping services as well as advanced and evolving scraping techniques.

The anti-scraping solution must be comprised of multiple countermeasures in all three classes of countermeasures (prevent, detect, and recover) as defined by OWASP, sufficient to address all aspects of the security threat model, including at least complete implementations of all of the following: Fingerprinting, Reputation, Rate, Monitoring, and Instrumentation.

Those fielding displays and APIs requiring anti-scraping technology must demonstrate compliance with the above requirements using technology they have built or a commercial product/service. It must be demonstrated that the technology meets those requirements and that it has been properly configured to effectively address scraping.
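
To make the countermeasure classes concrete, here is a toy sketch of just two of them – “Rate” and “Fingerprinting” – and a hint of why no single class suffices; a real deployment would layer reputation analysis, monitoring, and instrumentation on top, and the thresholds and bot signatures below are illustrative only.

```python
# A toy sketch of two countermeasure classes from the table below
# ("Rate" and "Fingerprinting"); thresholds and signatures are
# illustrative, not a complete or recommended configuration.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 30
KNOWN_BOT_AGENTS = ("python-requests", "curl", "headlesschrome")

hits = defaultdict(deque)  # request timestamps per client fingerprint

def allow_request(ip: str, user_agent: str) -> bool:
    # Fingerprinting: block obvious automation by user-agent string.
    if any(bot in user_agent.lower() for bot in KNOWN_BOT_AGENTS):
        return False
    # Rate limiting: key on IP *and* user agent; IP alone is easily
    # rotated by distributed scrapers, as the text above notes.
    key = (ip, user_agent)
    now = time.time()
    q = hits[key]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop timestamps outside the sliding window
    if len(q) >= MAX_REQUESTS:
        return False
    q.append(now)
    return True
```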

Following is some more detail about each countermeasure:

Countermeasure | Brief Description | Prevent | Detect | Recover (Mitigate)
Requirements | Define relevant threats and assess effects on site’s performance toward business objectives | X | X | X
Obfuscation | Hide assets, add overhead to screen scraping and hinder theft of assets | X |  | 
Fingerprinting | Identify automated usage by user agent string, HTTP request format, and/or device fingerprint content | X | X | 
Reputation | Use reputation analysis of user identity, user behavior, resources accessed, not accessed, or repeatedly accessed | X | X | 
Rate | Limit number and/or rate of usage per user, IP address/range, device ID / fingerprint, etc. | X | X | 
Monitoring | Monitor errors, anomalies, function usage/sequencing, and provide alerting and/or monitoring dashboard |  | X | 
Instrumentation | Perform real-time attack detection and automated response | X | X | X
Response | Use incident data to feed back into adjustments to countermeasures (e.g. requirements, testing, monitoring) | X |  | X
Sharing | Share fingerprints and bot detection signals across infrastructure and clients | X | X | 


The appendix that follows provides more thorough OWASP countermeasure definitions.

Appendix: OWASP Countermeasure Definitions

The following is excerpted from “OWASP Automated Threat Handbook Web Applications”:

•    Requirements. Identify relevant automated threats in security risk assessment and assess effects of alternative countermeasures on functionality, usability, and accessibility. Use this to then define additional application development and deployment requirements.
•    Obfuscation. Hinder automated attacks by dynamically changing URLs, field names and content, or limiting access to indexing data, or adding extra headers/fields dynamically, or converting data into images, or adding page and session-specific tokens.
•    Fingerprinting. Identification and restriction of automated usage by automation identification techniques, including utilization of user agent string, and/or HTTP request format (e.g. header ordering), and/or HTTP header anomalies (e.g. HTTP protocol, header inconsistencies), dynamic injections, and/or device fingerprint content to determine whether a user is likely to be a human or not. As a result of these countermeasures, for example, browsers automated via tools such as Selenium must certainly be blocked. The technology should use machine learning or behavioral analysis to detect automation patterns and adapt to the evolving threat on an ongoing basis. (A minimal sketch of header-based fingerprinting follows this list.)
•    Reputation. Identification and restriction of automated usage by utilizing reputation analysis of user identity (e.g. web browser fingerprint, device fingerprint, username, session, IP address/range/geolocation), and/or user behavior (e.g. previous site, entry point, time of day, rate of requests, rate of new session generation, paths through application), and/or types of resources accessed (e.g. static vs dynamic, invisible/ hidden links, robots.txt file, paths excluded in robots.txt, honey trap resources, cache-defined resources), and/or types of resources not accessed (e.g. JavaScript generated links), and/ or types of resources repeatedly accessed. As a result of these countermeasures, for example, known commercial scraping tools and the use of data center IP addresses must certainly be identified and blocked.
•    Rate. Set upper and/or lower limits and/or trend thresholds, and limit number and/or rate of usage per user, per group of users, per IP address/range, and per device ID/fingerprint. Note that this kind of countermeasure cannot stand alone, as hackers commonly utilize a slow crawl from many rotating IP addresses that can simulate the activity of legitimate users.
•    Monitoring. Monitor errors, anomalies, function usage/sequencing, and provide alerting and/or monitoring dashboard.
•    Instrumentation. Build in application-wide instrumentation to perform real-time attack detection and automated response including locking users out, blocking, delaying, changing behavior, altering capacity/capability, enhanced identity authentication, CAPTCHA, penalty box, or other techniques needed to ensure that automated attacks are unsuccessful.
•    Response. Define actions in an incident response plan for various automated attack scenarios. Consider automated responses once an attack is detected. Consider using actual incident data to feed back into other countermeasures (e.g. Requirements, Testing, Monitoring).
•    Sharing. Share information about automated attacks, such as IP addresses or known violator device fingerprints, with others in same sector, with trade organizations, and with national CERTs.
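As referenced above, here is a minimal Python sketch of the header-based side of fingerprinting. It is deliberately simplistic: real products combine many more signals (TLS fingerprints, JavaScript challenges, behavioral models), and the marker list below is illustrative, so treat this as a sketch of the idea rather than an effective defense on its own.

    # Sketch of request fingerprinting via header anomalies. Checks like
    # these are easy to evade in isolation; real solutions layer many
    # signals and adapt over time.
    KNOWN_BOT_MARKERS = ("python-requests", "curl", "scrapy", "headlesschrome")

    def looks_automated(headers):
        """Flag requests whose headers resemble automation, not a browser."""
        user_agent = headers.get("User-Agent", "").lower()
        if not user_agent or any(m in user_agent for m in KNOWN_BOT_MARKERS):
            return True
        # Ordinary browsers send Accept-Language; many simple bots do not.
        if "Accept-Language" not in headers:
            return True
        return False

    print(looks_automated({"User-Agent": "python-requests/2.31"}))  # True
    print(looks_automated({"User-Agent": "Mozilla/5.0 (Windows NT 10.0)",
                           "Accept-Language": "en-US"}))            # False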





mattsretechblog: matt cohen (Default)
2016-11-02 12:00 am
Entry tags:

Computer Vision and Improving Real Estate Search

Artificial Intelligence (AI) Creates New Opportunities

Last week I was on a panel at the RESO conference where we talked about software personalization as an important trend. Back in 2008, I wrote some articles about improving prospecting and real estate search, one aspect of which was that we needed to get smarter about understanding consumer preferences so that consumers don’t have to page through so many listings or can at least see the most likely matches to their interests first. As I noted in that article, the tricky part was that “there are various qualitative aspects of property selection that we don’t currently track data for at the current time.” That’s where “computer vision”, a technology that is becoming both more robust and more common, could make a difference.

“Computer vision” is the ability of a computer to analyze photos and create data out of them. Imagine a consumer likes open floor plans, modern kitchens, wide driveways, high ceilings, mature trees, or lots of natural light – those are all things a consumer might mention when describing their dream home, but little of that information is reliably tracked by agents in most MLSs. With computer vision, that data could be extracted from the listing photos by a computer as keywords by which listings could be searched – without having to manually sort through many homes and many more photos.
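As a rough sketch of the concept (not any vendor’s actual implementation), imagine a model already fine-tuned to recognize room types. Everything here is hypothetical – the model file, the label set – since stock image classifiers don’t know real-estate categories out of the box:

    import torch
    from PIL import Image
    from torchvision import transforms

    # Hypothetical: assumes room_classifier.pt is a PyTorch model already
    # fine-tuned to score listing photos against these room-type labels.
    ROOM_LABELS = ["kitchen", "bathroom", "bedroom", "living room", "exterior"]

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    model = torch.load("room_classifier.pt")
    model.eval()

    def tag_photo(path):
        """Return the most likely room label for one listing photo."""
        batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            scores = model(batch)
        return ROOM_LABELS[int(scores.argmax())]

    # Tag each photo so a buyer can page through just the kitchens.
    tags = {photo: tag_photo(photo) for photo in ["listing_01.jpg", "listing_02.jpg"]}

Once every photo carries a tag like this, “kitchen” becomes a searchable field even though no agent ever entered it.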

I recently saw an example of computer vision demonstrated by RealScout (not a Clareity client) that I was impressed by. I don’t think any company has fully leveraged the potential of computer vision and created the “perfect” product with it, but this company had clearly made some real progress on applying computer vision to real estate search. During the demonstration they showed how they could automatically tag photos, so a consumer could, for instance, page through just the kitchen photos of multiple listings – even if the photos weren’t labeled “kitchen” by the MLS. The technology enables searching on normally unsearchable criteria, and comparing images for key features and rooms side-by-side and room-by-room, as illustrated below:



Images used with RealScout’s permission.

We’ve watched computer vision evolve over the past fifteen years or so – Google’s image search was launched in 2001 and has continued to get more sophisticated, and there are many other companies outside our industry that specialize in it. RealScout is not the only company using computer vision in real estate – during their demonstration they showed how Trulia has used it, and I’ve seen others explore this area over the past few years though not always release the resulting products. There are also quite a number of companies outside of our industry that license computer vision technology – of course, it would have to be optimized for real estate use. And there are some limitations to what the technology can do at this point, especially where listing photos are limited. No one wants their client to miss out on a home because the pictures didn’t highlight a certain feature. That said, this technology – if used artfully – can certainly augment existing listing search technologies and create a compelling user experience.

I have no doubts that our industry will continue to evaluate how to create great listing entry and search experiences using computer vision, and that the number of products – both existing and new – leveraging this technology will grow over time. This is certainly an area to keep an eye on.
mattsretechblog: matt cohen (Default)
2016-09-01 12:00 am
Entry tags:

Reducing the Risk of Real Estate Wire Fraud

The Groundbreaking Definitive Paper on the Subject

Ongoing bad publicity related to real estate transaction wire fraud threatens to damage the reputation of the real estate industry. Previous issue descriptions have been incomplete and have also not provided thorough guidance for those brokerages, title companies, attorneys, and others that engage internally in the wiring of funds or interact with the consumer regarding wire transfers. This paper provides information relevant not just for those directly establishing and communicating wiring instructions, but also for those acting in an advisory role to the consumer during the transaction.

The paper includes best practices – sample procedures and communications gathered from franchises, brokers, title companies and others. It also includes a sample company policy Clareity Consulting has created based on both these best practices and on our experience advising brokerages and other companies regarding information security. Not all aspects of this paper will apply to all companies involved in the transaction, and what is relevant will still need to be tailored to an individual company’s business processes. Still, it provides a valuable starting place toward reducing the risk of wire fraud.

Please download the paper here:
Reducing the Risk of Real Estate Wire Fraud

mattsretechblog: matt cohen (Default)
2016-06-22 12:00 am
Entry tags:

MLSs Win with Their Business Rules in RETS

RESO Standards Let MLSs Take Control in a New Way

For the past few months I’ve been a bit quieter than usual on the blog because I’ve been working on a number of time-consuming projects including MLS regionalization, MLS selection, strategic planning sessions, and information security audits. But one very interesting project that has been taking some of my time has been an opportunity to work toward a standard for expressing business rules inside RETS. The focus so far has been mostly on listing input business rules, but that focus could expand in the future. This project should be of interest to every MLS, and I strongly encourage MLSs to participate in the ongoing process at RESO.

I first submitted the value proposition for this effort to RESO as part of a business case worksheet back in April of 2010: “MLSs with well documented business rules can more efficiently and smoothly move to a new MLS system, add additional front ends with full functionality or integrate other software that requires use of business rules – without manual work and often inaccurate results. This will result in smoother conversions, more software choice, and enhanced competition and innovation.”  At that point it wasn’t prioritized but in late 2015 I was asked by the RESO Research and Development work group chair, Greg Moore, to lead the charge to come up with a standard for expressing business rules inside RETS.

As part of the process, I gathered knowledge to attack the problem by visiting with a number of MLSs. During that process, sometimes I saw things that made it even more clear how urgent it is to succeed in creating a standard – one that can be understood by MLS staff and not just technical people – and getting it adopted. For example:
  • At several MLSs, when I unpacked what their vendor had ‘coded’ as their listing input business rules into plain English, we found implementation errors. “That’s not how our rule is supposed to work,” became a common refrain during several of my visits.
  • At one MLS, I saw a ‘botched’ calculated field that had been that way for years, simply because no one – not an analyst or an MLS staff person – could review the programmer’s work, looking at it in terms of the business rule that drove the calculation.

Also, because there’s no common way of expressing these rules, it’s hard for MLSs to talk about them, establish best practices, and discuss key differences in how data is validated during regionalization discussions.

Right now, the effort to come up with a standard for expressing business rules inside RETS is a work in progress, but it is moving quickly. So far, the group has agreed to continue down the path of using a well-established business rule language called RuleSpeak and developing a short-hand for the rules expressed in RuleSpeak in what we call REBR (Real Estate Business Rules) Notation. The RuleSpeak structured English notation is perfect for clearly and unambiguously expressing business rules, even complex ones, in non-technical language using business vocabulary. Expressions that MLS non-technical staff can read and validate are the single source of truth when it comes to business rules. Everything else is mediated by someone who is not the business owner, so errors can happen along the way. Following are just a few common RuleSpeak examples. Note that most examples use RETS Data Dictionary names for fields – but I could just as easily have used more user-friendly MLS field labels.
  • An Expired Listing must accept user input up to 15 days after Expiration Date.
  • A Closed Listing must not accept user input. Enforcement: MLS Staff may override this.
  • Data field GarageSpaces must have a value if GarageYN has a value of “Y”.
  • ListingContractDate must be on or before today’s date.
  • YearBuilt must be on or after 1700.
  • ParkingTotal must be greater than or equal to the value of RentedParkingSpaces.
  • ListPrice must be greater than or equal to 1, and less than or equal to 50,000,000.
  • Status of an Active Listing of Residential Property Type may only change to one of the following: “Active”, “Cancelled”, “Extended”, “Under Agreement”, “Temporarily Withdrawn”. Enforcement: MLS Staff may override.
  • Listing Status must be set to Expired on the Expiration Date if Current Listing Status is not Expired, Pending, Sold, or Leased.
The REBR Notation mentioned earlier divides all the MLS rules into about a dozen basic syntaxes and, with the documentation we’re working on, it should be easy for MLSs (and their vendors) to articulate the business rules and end up with rules that both people and computers can easily understand – rules that are not specific to one MLS system implementation and that would be documentation of the MLS organization’s intellectual property going forward.

Following is an example of a rule in both RuleSpeak and REBR Notation:

 
Rule stated in RuleSpeak “Structured English”:
An Expired Listing must accept user input up to 15 days after Expiration Date.

Same rule using REBR Notation:
ALLOW_EDIT LISTING YES
IF ListingStatus is Expired AND TODAY = ONORBEFORE (Expiration Date + 15 DAYS) ENDIF
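To make the example concrete, here is a hedged Python sketch of one way a system might evaluate that rule. The field names follow the RESO Data Dictionary, but the evaluation logic is purely illustrative and is not part of the draft REBR standard:

    from datetime import date, timedelta

    # Illustrative only - one possible evaluation of the rule above,
    # not part of the draft REBR standard.
    def allow_edit(listing, today=None):
        """ALLOW_EDIT LISTING YES
        IF ListingStatus is Expired AND TODAY = ONORBEFORE (ExpirationDate + 15 DAYS)"""
        today = today or date.today()
        return (listing["ListingStatus"] == "Expired"
                and today <= listing["ExpirationDate"] + timedelta(days=15))

    listing = {"ListingStatus": "Expired", "ExpirationDate": date(2016, 6, 1)}
    print(allow_edit(listing, today=date(2016, 6, 10)))  # True - inside the 15-day window
    print(allow_edit(listing, today=date(2016, 7, 1)))   # False - window has passed

The point of the standard is that the RuleSpeak/REBR statement, not code like this, remains the single source of truth; any system could generate its own enforcement logic from the same rule.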

  
None of this language is finalized yet: this is just research happening inside a business rules sub-group of the RESO Research & Development (R&D) group – but hopefully readers will see how valuable all of this can be to them and we’ll see more participation in this part of RESO. If you want to get involved in the group, please email Jeremy Crawford  and ask to be added to the business rules group. If you already belong to RESO, whether or not you are in the work group, you can just log into the RESO collaboration system and get involved with the discussions there too.
mattsretechblog: matt cohen (Default)
2016-05-02 12:00 am
Entry tags:

MLS Regionalization: Beyond the Needs of Any One MLS

It’s Not Just About Overlapping Market Disorder

I was talking the other day with my friend Kevin McQueen about MLS consolidation and regionalization. Both of us help MLSs through the process, and we like to talk and share our experiences with each other in order to help our clients better and move the industry forward. One of the challenges we discussed was that, for MLS regionalization to gain momentum, MLS leadership at every MLS in the country – including boards of directors – needs to better understand the need for MLS consolidation and regionalization. Many don’t attend industry conferences and are not aware of the larger strategic issues driving it.

When Clareity Consulting discusses the strategic reasons for MLS regionalization, we often focus on the following objectives:

1. Reduction in cost
2. Improvement in MLS product / service scope
3. Associations can focus more on association functions
4. Reduction / elimination of arbitrary information and system barriers
  • Reduction of need for multiple memberships
  • Reduction of number of data feeds for participants’ information systems / websites
  • Providing more comprehensive and accurate statistics for overlapping market areas
  • Providing wider listing exposure for sellers
5. Reduction of the number of systems some members need to learn
6. Improvement of MLS rule and data accuracy compliance, providing uniform rules and enforcement
7. Providing efficiency for participants who want to be involved with governance / committees

[Update: these days I have even more goals, including driving technology provider interest in our industry.]

Many MLSs evaluating regionalization on their own initially consider only one or two of these objectives – for example, reduction in cost, or elimination of the arbitrary information barriers – and don’t evaluate the decision against all the items listed above. As a result, they might conclude something like, “We’re geographically isolated so we don’t need to consider regionalization.”

But even the list of objectives above is incomplete, focusing on local and regional needs rather than the larger threats facing the MLS industry. If one looks at the NAR’s D.A.N.G.E.R. Report, one should consider how the current splintered MLS industry is – or is not – ready to deal with the threats detailed in that report, including, but not limited to:

  • Entry by a New Player
  • Unclear End Result (Loss of Control)
  • Control of a National MLS
  • Decentralized Infrastructure Becomes Obsolete
  • Large Patent Troll Attack
  • Security Breach 

A more consolidated MLS industry would be better able to mitigate these risks.

Back to my conversation with Kevin. He asked, “How do we reach executives and board members at association/MLSs that are resistant or uninformed about the possibilities for regionalization?” Kevin suggested one way was that we could speak on the subject more at conferences. But, so many of the people who need to be reached don’t attend these conferences, and certainly wouldn’t attend a session on regionalization if they’ve already made up their mind on the subject.

We also discussed NAR mandating NAR- or CMLS-developed best practices for MLSs. While the core standards approach NAR took with associations could be useful, it leads to a very slow, incremental approach that may have been appropriate 20 years ago but is too slow to meet today’s challenges. Based on the MLS regionalization end-game described in my recent Inman article, NAR could simply mandate standards for MLSs that meet the conditions of that end-game and initiate a fast process to get us there. But is a top-down mandate approach the best one?

Kevin and I both believe that the best approach is a collaborative one, where association and MLS leadership engage in a consensus-driven process for regionalization. Clareity recently outlined this process in an Inman News article, republished here: “MLS Regionalization – Breaking Through.” Are the threats to the industry and the benefits of MLS regionalization becoming clear enough that initiative momentum will radically increase? Will leaders take an active role in designing the best possible future for their organizations and the industry at large? Or will they continue to focus on their own organization and ignore what is ultimately best for their members and the industry? Or will they wait for one of the worse threats from the D.A.N.G.E.R. Report to occur and make all of this irrelevant?






mattsretechblog: matt cohen (Default)
2016-04-25 12:00 am
Entry tags:

MLS Regionalization: Breaking Through

Consolidation of MLSs Requires Process

In part one of this article ("MLS Regionalization: Setting the Goal") Clareity outlined the criteria for determining the future “end game” for MLS consolidation. In this part, we will describe Clareity’s process for MLS consolidation and regionalization and how we overcome some of the common objections to consolidation during that process.

A good process for MLS consolidation and regionalization has four parts: planning, decisions, formalizing decisions, and actualization:

1. PLANNING

In the first part, planning, organizational leaders meet with a facilitator who can drive consensus on the hard issues, including goals, ownership and governance, money flow, leadership, staffing, and the product and service offerings. The facilitator provides examples of how decisions in these areas have worked in other organizations and captures the group’s consensus in a document which all participants approve of, so there is no backtracking later. The leaders may consult with their boards of directors during this phase and work to sell the consensus plan. There are other decisions that will need to be made along the way, such as specific technologies, but the decisions above are the ones that will set the framework for the long term, while technologies come and go. Some groups want to focus on cost right away, but how can cost be discussed when no decisions have been made yet about the factors that drive it – leadership, staffing, products, and services? And how can one make decisions about those things until a decision-making structure has been put in place? A successful planning process is all about asking the right questions at the right time.

2. DECISIONS

In the decision-making phase, the leadership of all stakeholder organizations meet together to discuss areas still lacking consensus. Having group meetings is an important part of the process because it is an opportunity to address many remaining fears, ensuring all valid issues are on the table. The facilitator can provide perspective and knows how to address common objections. The group must have trust in the process and confidence that they are all working toward a common goal: a better MLS that serves all of the subscribers in the region well. In this phase, the group can make more definitive decisions based on the initial planning, which the facilitator captures.

3. FORMALIZING DECISIONS

Next, the facilitator will use the documentation created in the previous step as the basis of a business plan. All of the planning and decisions will be incorporated into this document. A draft budget, a plan for the next steps, and a timeline for regionalization will be developed and included as well.

4. ACTUALIZATION

The final step, actualization, involves creating the company, addressing all of the legal issues, commencing initial and ongoing communications, selecting technology and contracting (or re-negotiating) as needed, and implementing MLS system changes as needed. Having top-notch legal counsel is critical in this phase, and Clareity Consulting likes to collaborate with the best in the business.

OVERCOMING OBJECTIONS

There are usually many questions and fears about MLS regionalization that must be addressed along the way. Sometimes agents worry that competitors from the adjoining MLS will sell out of their traditional area and create problems, and they need to be reassured that this has not been a serious issue in regional MLSs that have formed in the past. Other times, MLS executives and staff fear for their jobs, or board members worry about the continuation of their leadership roles – worries that can be addressed by discussing the role of service centers in the new organization and creating a plan for merging leadership. Some will worry about strife between associations in a regional MLS, but having strong bylaws and intellectual property agreements can minimize that risk. Revenue traditionally shared with the association can also be a concern; it can be addressed in a variety of ways, and Clareity’s CEO, Gregg Larson, described one such approach at Clareity’s MLS Executive Workshop. The point is that common concerns about MLS regionalization can be addressed as a part of the process, and such concerns shouldn’t stop the process from happening.

SUCCESS

With a sound process and proper facilitation, organizations working together can demystify and accomplish MLS consolidation and regionalization. Once fears are put aside and the MLSs commit to engaging in the process, it is generally possible to address stakeholder issues and concerns, achieving the goal of having a single MLS with strong capabilities that covers an appropriate geographic area.

 

 






mattsretechblog: matt cohen (Default)
2016-04-19 12:00 am
Entry tags:

MLS Regionalization: Setting the Goal

Aim Before You Fire

Why are there still so many MLSs? I’d argue it’s mainly because we don’t have the answer to other questions:  How many MLSs should there be, and where are their borders? Should there be six MLSs? 30? 60? 100? One?  Can anyone be held to account for not meeting a goal that has not been set? Before we consider how to achieve a goal that will enable consolidation, we need to know what that goal – the “win condition” – is.

Clareity Consulting is studying the MLS regionalization “win condition”.  We believe that the industry first needs to understand what the consumer considers to be a natural market area. If someone gets a job in Manhattan, New York City, they may end up living in a house in that borough (2 MLSs), one of the outer boroughs or Long Island (several other MLSs), take the train up to Westchester or Connecticut, or out to New Jersey (even more MLSs). How can an agent serve his or her customer when he or she can’t set up a single prospect search in the MLS system, since the data is spread out over nearly a dozen MLSs? The situation is even worse when MLS geographies overlap, or a property is on the border of more than one MLS. In this situation, agents can’t find all the CMA “comps” they need in one system. If an MLS doesn’t cover the natural market area – including overlapping and adjoining areas – it is doing its subscribers and their clients a tremendous disservice. How can that be justified in today’s world where real estate portals have no boundaries and consumers are free to search everywhere?

There are other criteria that can be looked at in order to evaluate the “win condition”. Can the very smallest MLSs – even if they are isolated geographically – meet reasonable standards of service? NAR hasn’t developed MLS core standards as they have for associations, and it is high time that it did so. CMLS did a great job summarizing MLS Best Practices.  Perhaps they could establish the core standards.  Clareity can easily imagine core standards covering compliance management, data standards, support, technology, data licensing and distribution, and participant data access, as well as security and privacy. Can those smallest MLSs provide that service at a reasonable cost? Another criterion for the win condition is whether an MLS can meet the needs of large brokerages that currently must belong to and aggregate data from multiple MLSs.

...

Since this article has focused mostly on listing data, one might reasonably ask, “Can’t MLSs just share data? Do they really need to consolidate?” There certainly are cases where that might be sufficient, but MLSs need to evaluate their goals before they consider that answer. Do they want to reduce the number of systems some members need to learn and pay for? How about providing consistent MLS rules and data accuracy compliance across the natural market? What about providing a single copyright / IDX notice for websites? Must a broker belong to many MLS boards to affect policy in their market, or can efficiency be provided in a single MLS? Are there economies of scale that are needed to provide the best service at the lowest cost to subscribers? Often data shares are not optimal because they add additional overhead, inject delays in getting the listings into the repository and into partner systems, and/or have problematic source data differences between the local systems. A data share may be a good solution, but careful evaluation is needed to determine if that’s the right approach for the MLS consolidation end-game, or if further consolidation is warranted. Data shares can also be an excuse to simply maintain the status quo when the right thing to do is consolidate.

We’re not going to see substantially faster progress in MLS consolidation until two things happen:

  1. Brokers get behind consolidation and/or demand it, especially in obviously overlapping markets, and
  2. MLS boards of directors openly discuss the future of MLS in terms of the types of business objectives discussed in this article, setting goals based on these business objectives, and planning for them. 

A process for overcoming barriers to creating a regional MLS will be described in a blog post titled “MLS Regionalization: Breaking Through.”

mattsretechblog: matt cohen (Default)
2015-12-12 12:00 am
Entry tags:

Standards Can Put Brokers in the Lead Management Driver’s Seat

Why Brokers Need to Get Involved in Real Estate Standards

Brokers are just starting to get involved in the real estate standards efforts at RESO but haven’t yet begun to push for any strategic initiatives. Earlier this month I posted about how brokers would benefit from RESO coming up with standards for measuring how advertising site visitors are interacting with listings and I hinted that the next strategic frontier for brokers at RESO after creating those interaction standards could be creating standards for transporting lead information.

Just recently, I was reading about one franchise that parses lead emails to automatically put leads into an agent’s contact database. That approach is unreliable, as email formats may change. Also, the approach doesn’t scale well to receiving leads from all the places from which leads may be generated. Currently, leads are moved around via email and also via a hodge-podge of proprietary APIs. This is a problem that is solvable, and if done right, it can really put brokers in the lead management driver’s seat, in a way they never have been before.

First, let’s look at what we want to move around. The typical lead includes:

• Contact information. There’s an existing RESO standard for this.
• Additional lead information (e.g. Buyer or seller? How soon are they ready to buy/sell? Information request details. Etc.). There’s not yet a RESO standard for this.
• Saved searches. There’s an existing RESO standard for this.
• Interaction with listings. There’s not yet a RESO standard for this, but a work group has been formed.



There may be additional categories of information that could form the lead, but the ones above would be a good starting point for discussion. Once “what” information comprises a lead is established and standardized, we can turn our attention to creating a standard for “how” that information is transported. This is where it gets interesting!

First, we must take a step back away from the technology and ask the business question, “Who controls where the lead information goes in different contexts?” For example, a broker may have a lead management system and want portal leads to go into that system. But what if that broker is a RE/MAX broker, and the franchise has done a deal with Zillow to get those leads into LeadStreet? What about if the agent has a preference for how they want to receive leads? Maybe they don’t want the lead going into any type of system and just want a quick text message and/or email because that’s how they can handle the lead most effectively. How will it be determined who can make those decisions with the portal sites and other lead sources? There may be variation in what data payload is desired in different contexts, and the recipient may want data sent to different places – for example, into the MLS for easy prospect setup, into a CRM / marketing system, and into a lead management system. This information could also be used to create an elegant bridge for the consumer from an advertising portal to a broker site/app experience – imagine the consumer not needing to re-input their searches and favorite listings at the broker site!
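To make the idea concrete, here is a hedged sketch of what a standardized lead payload and hand-off might look like. No RESO lead-transport standard exists yet, so every field name and the endpoint below are invented for illustration:

    import requests  # pip install requests

    # Hypothetical only: no RESO lead-transport standard exists yet, so
    # these field names and this endpoint are invented for illustration.
    lead = {
        "Contact": {"FirstName": "Pat", "LastName": "Buyer",
                    "Email": "pat@example.com", "Phone": "555-0100"},
        "LeadDetails": {"Role": "Buyer", "Timeframe": "3-6 months",
                        "Message": "Requesting a showing of 123 Main St."},
        "SavedSearches": [{"City": "Phoenix", "MinBeds": 3, "MaxPrice": 450000}],
        "ListingInteractions": [{"ListingKey": "ABC123", "Event": "Favorited"}],
    }

    # The broker, franchise, or agent would configure where leads are routed;
    # here we imagine one standard endpoint per recipient system.
    response = requests.post("https://broker.example.com/leads", json=lead, timeout=10)
    response.raise_for_status()

Because the payload would be standard, the same lead could be routed to the MLS, a CRM, and a lead management system without bespoke integrations for each source.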

For brokers and other stakeholders, the opportunity to establish standards for leads is tremendous. Brokers have been joining RESO in recent years and I’m looking forward to their increased active engagement in helping set priorities for the standards process.
mattsretechblog: matt cohen (Default)
2015-12-01 12:00 am
Entry tags:

Critical Unasked Questions about Measuring Real Estate Online Advertising

Brokers and agents need data on how advertising site visitors are interacting with listings, and there are a number of players in the real estate industry competing to provide this data. Each provider is tracking different things in different ways, and getting different advertising sites to implement their tracking mechanisms with little overlap. The way things have been going, I don’t believe that any of these competitors will “win” and achieve 100% adoption. That means brokers and agents are not going to win and be able to give sellers a complete view of what’s happening with the online marketing of their properties. Is there a solution to this problem?

What to Measure?

A long time ago, website owners just tracked “hits” to their website. Then, they got a bit more sophisticated, and started asking how many unique visitors there were. Then, at least in real estate, we started thinking from a business perspective and realized we needed to provide that type of information at the listing level, so it could be reported back to sellers. That was great, for its time. Now we’re at a point in most industries where e-commerce advertising effectiveness is tracked all the way from e-mail to website visit to sale, and expectations of what can or should be tracked in our industry are likewise evolving.

ListHub, for example, now tracks the following events where their tracking code is implemented:
  •     A property was displayed in a list of search results.
  •     Viewed the details of a single property.
  •     Sent an email to an agent.
  •     Sent an email to an office.
  •     Placed a phone call to the agent.
  •     Placed a phone call to the office.
  •     Clicked on the agent’s website of a listing.
  •     Clicked on the office’s website of a listing.
  •     Followed a listing or saved it to favorites.
  •     Clicked the virtual tour link of a property.
  •     Clicked to see more photos of a property.
  •     Clicked the video link for a property.
  •     Printed details on the listing.
  •     Clicked the property map.
  •     Clicked to get directions to a property.
  •     Shared a listing via social networks (which site specifically?) or email.
It looks like ListTrac is tracking many of the above items where their code is implemented, and is additionally tracking:
  •     Showing requests
  •     User registration information
There are lots of other folks tracking this type of information. In this post, I’m not going to try to be comprehensive about all of them and what they’re tracking.

Of course, we want to know where (on which website or app) and when each of these events took place. We may also want to know if it’s a professional view or a consumer view, or via a category such as MLS (back-end or prospecting/client collaboration), IDX, VOW, or advertising portal.

It’s also ideal to track who the consumer is, with as much specificity as possible. Consider everything Google tracks in its analytics – from demographics (age/gender), interests, language and geographic location, behavior (new vs. returning, frequency, engagement), technology (browsers, OS, etc.), mobile vs. desktop, and lots more. Note: when you have your reports designed, don’t let the geeks clutter things up with information that is occasionally useful to site designers and the business side but not needed for regular reporting to agents and clients. Still, it’s good to collect this type of information, and agents would want even more if they could get it.

How to Measure?

There are many ways to collect all of this information, and the method of collection is driven by what we want to collect and where we want to collect it.

The simplest way to collect some information would be by embedding a reference to a 1-pixel “invisible” (transparent) image with URL parameters that provide additional information. The image could be embedded in websites, emails, etc., and the server logs analyzed to create statistics. Unfortunately, most e-mail clients don’t download images by default, specifically to thwart this type of tracking. Also, browsers may cache the image, making this an unreliable form of tracking. Finally, while an image is good for tracking basic “hits,” technical limitations of this method mean that we can’t be as sophisticated as desired regarding “what” and “who” we track. (A minimal sketch of this pixel approach appears after the list below.)

The other way to collect information is to embed JavaScript code in the website, kind of like Google Analytics. There are a number of issues with this approach:
  •     It involves client-side code embedded in, or referenced from, the page.
    •         The code and its interactions can affect page load and can slow down the user experience – especially if there are multiple tracking codes that have to be implemented.
    •         Referenced code can be – and has been – changed by the third party controlling the code to collect undocumented information, which may be inconsistent with the site’s expectations and what it communicates to users via its privacy policy. It can even hypothetically allow that party to interfere intentionally with the user experience (for example, generating a pop-up message). This lack of control may be unacceptable to site owners.
  •     Tools keep evolving for blocking front-end / third-party site tracking scripts and cookies.
  •     Advanced tracking involves making many additional changes to the website – not just an easy cut-and-paste of some JavaScript code. This, coupled with the need to implement various competing tracking codes (at least at the present time!), is completely untenable for website developers. This combination of issues, if nothing else, is currently the “nail in the coffin” of this approach.
  • The pushback on this method has been significant, with quite a number of players indicating that this will happen over their dead bodies. Again, if we can’t figure out a method that works for everyone, this isn’t going to be successful other than as a fallback position for less technically capable local website developers.
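For concreteness, here is the minimal sketch of the tracking-pixel approach promised above (in Python, using Flask; the endpoint, parameters, and CSV logging are all illustrative). It inherits every limitation already noted – blocked images, caching, and very thin “who” data:

    from flask import Flask, request, send_file  # pip install flask
    import csv, io, time

    app = Flask(__name__)

    # A 1x1 transparent GIF - the classic tracking-pixel payload.
    PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
             b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
             b"\x00\x02\x02D\x01\x00;")

    @app.route("/pixel.gif")
    def pixel():
        # The embedding page passes event details as URL parameters, e.g.
        # <img src="https://tracker.example.com/pixel.gif?listing=ABC123&event=detail_view">
        with open("events.csv", "a", newline="") as log:
            csv.writer(log).writerow([time.time(),
                                      request.args.get("listing", ""),
                                      request.args.get("event", ""),
                                      request.headers.get("User-Agent", "")])
        return send_file(io.BytesIO(PIXEL), mimetype="image/gif")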

I’ve heard one tracking company say that the third-party JavaScript approach is the only one that will work and that having code located on their own servers running client-side in the web browser allows them to verify there is no statistical manipulation from the publisher. I understand that argument, but find it very disrespectful of the publishers, especially since I’m not aware of any publishers “cheating” in a way that merits this kind of distrust. Many publishers do facilitate some “cheating” via view/click-fraud by not stopping “bots” – as has been discussed on this blog previously and which can’t be ignored for much longer – but that’s something the publishers have to deal with irrespective of the tracking mechanism selected. The discussion must be had about how to address all of the other issues bullet-listed above if the third-party JavaScript method is even to be considered.

The alternative to the third-party JavaScript method is for sites to implement a RESO-standard API – both a “dictionary” and “transport” – for tracking, a standards effort that is currently in progress. Implementation might also include the creation of JavaScript libraries to collect some of the information and these types of libraries could be open sourced to facilitate adoption. Using a back-end API method would address most of the third-party JavaScript issues above and allow site developers to implement tracking using a single mechanism. It should also allow us to address at least the first of two critical, but, before now, unasked questions…
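Since that standards effort is still in progress, the following is only a hypothetical sketch of what reporting a listing event through such a back-end API might look like; the endpoint and every field name are invented for illustration, not drawn from any published RESO standard:

    import requests  # pip install requests

    # Hypothetical: the RESO tracking "dictionary" and "transport" are still
    # being defined, so this endpoint and these fields are invented.
    event = {
        "EventType": "DetailView",            # what happened
        "ListingKey": "ABC123",               # which listing
        "Source": "examplebrokersite.com",    # where it happened
        "Timestamp": "2015-12-01T10:15:00Z",  # when it happened
        "ActorType": "Consumer",              # consumer vs. professional view
    }
    response = requests.post("https://tracking.example.org/events",
                             json=event, timeout=5)
    response.raise_for_status()

Because the payload is standard, the same event could be fanned out to more than one report provider – which is exactly the decoupling argued for below.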

The Most Critical – But Unasked – Questions

The first critical question that must be asked is this: Who Gets the Measurement Data? There is an underlying assumption, the way tracking is mostly done today, that the party that collects the information is the party that gets to use it. I assert that such a tie can and should be broken. There’s no reason that tracking data, in a standard format, couldn’t be sent to a variety of report providers, encouraging competition and innovation in that space and allowing more of an opportunity for consolidated reporting regardless of the chosen platform.

Perhaps, just as brokers and agents today can decide where their listing information goes – IDX, specific publishers, etc. – they should also be able to decide what happens to the measurement data for the listing. For instance, brokers may want the data to just go to their back-end, where they have a great report for sellers that includes advertising effectiveness. Other brokers may want the data to go to their MLS, where there may be some basic reports available to agents. Still other brokers may want it to go to any number of companies that can provide unique reports (some possibly for a fee) to their agents. Finally, some may want it to go to many or all of the above. But, if one puts the brokers and/or agents in control over this data, and not every tracking report provider gets all of the data, what about reports showing traffic to a listing relative to “comparable” listings’ traffic? That wouldn’t be possible without all (or almost all) of the data. Maybe we would need to come up with something like broker reciprocity for this type of data.

Regardless, the point stands that there can be, and probably should be, various providers of reports, and simplifying collection of the data and providing a standard way of sending it to more than one provider is a worthwhile goal.

The second critical question that must be answered is this: “What Can They Do with the Measurement Data?” Website and app providers have – and must follow – privacy policies that they establish for the collection and use of user data. Putting website providers back in control of collection of data allows them to ensure that their policy is aligned with what they collect and share. But, once they share the data with report providers, who can say what those report providers can do with that data? Already, reports have surfaced of one report provider selling the data in unexpected ways. We must address this question before it imperils us. Perhaps one of my industry attorney friends will take up this question; it’s really more in their bailiwick than my own.

Tracking Currently

[Diagram not preserved.]

The Way Forward

As we answer the questions above, working on the technical, legal, and business issues surrounding the tracking of listing-related user activity, I anticipate new questions for debate. For example, consider all the issues surrounding tracking unique visitors between various websites and applications! Another intriguing possibility would be the creation of a RESO standard API for lead information, but I am getting ahead of myself.

First things first: if you have an interest in the issues above, I’d suggest getting involved in the Real Estate Standards Organization (RESO – http://reso.org), and if you want to be involved in the creation of standards for tracking, become actively involved in the work group. Alternatively, if you travel like I do and can’t make it to many meetings, write posts like this and bring them to the attention of those on the work group. Otherwise, there’s no complaining later!
mattsretechblog: matt cohen (Default)
2015-07-09 12:00 am

Fixing the Zestimate

Suggested Changes to Reduce Criticism and Improve Industry Relations

The Zestimate – Zillow’s AVM-calculated house price – has been a double-edged sword from the beginning. On the one hand, it gives property owners, buyers and sellers a comforting sense of empirical certainty – that an abstract, passionless computer algorithm, has declared from on high what a property should be worth. And it’s good entertainment for the general public. On the other hand, the Zestimate is not actually an empirical reflection of the value of the house. It is not an appraisal, a BPO, or even based on a casual walkthrough. It is simply a mathematical model that pulls together data such as the number of bedrooms, number of baths, comparable homes sold locally, and a dollop of Zillow’s special sauce. As such, it is simply not very accurate. Zillow explains just how inaccurate it is here:
http://www.zillow.com/zestimate/#acc – but I don’t believe many people read and understand that document. And Spencer Rascoff has spoken many times about the context in which the Zestimate should be placed – but I don’t believe most people who see the Zestimate have ever heard him speak about it.

Instead, people believe in the Zestimate that they see on the listings on Zillow.com. One reason, which I’ve already mentioned, is that it gives people supposedly objective backup for what they would like to believe about the value of a house. Another reason, perhaps both more insidious and effective, is that it gives the impression of extraordinary precision and accuracy. In the Star Trek episode, Return to Tomorrow, McCoy asks Spock how many miles of rock they will have to tunnel through; Spock replies, “Approximately 112.37 miles.” Anyone who can give the answer to two decimal places, right off the top of his head, must be a genius. The rock layer must be between 112 and 113 miles thick, even if McCoy’s tricorder is only accurate to plus or minus twenty miles. Zillow has it both ways; it has its page of fine print indicating the wide range of values a Zestimate may represent, certainly not read by everyone who visits the site, and it has the minutely exact figures it displays on its many pages that everyone sees. If Zillow says a house is worth $182,379, one imagines the house is worth between $182,000 and $183,000. It’s the nature of seeing such a precise number. Now, if Zillow were to just display the range for the house ($170-$195K) and a “confidence” rating next to it – as most professional AVMs do – the confusion and controversy around the Zestimate might go away.
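To illustrate how small the suggested presentation change would be, here is a trivial sketch (the numbers and confidence bands are invented):

    # Invented numbers: present an AVM result as a range plus a confidence
    # rating, rather than a single falsely precise figure.
    def present_avm(low, high, confidence):
        return (f"Estimated value: ${low / 1000:.0f}K-${high / 1000:.0f}K "
                f"({confidence} confidence)")

    print(present_avm(170_000, 195_000, "medium"))
    # Estimated value: $170K-$195K (medium confidence)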

The way the Zestimate is presented currently, it causes a variety of issues. Some sellers end up bullying their agents into going along with using this beautiful, exact number they found online, and many agents will go along with this to get the listing. Arguing about why this number isn’t useful in pricing a listing wastes the time of professionals, and it may harm both sellers and agents who price their home too high or low after seeing the Zestimate. It also could harm buyers that don’t put a bid in on a house that is priced right but which they think is overpriced because of the Zestimate. We all want this number. But this number, at least as currently presented with its deceptive precision, is bad for everyone. Let’s just consider that.

I think I understand why Zillow likes the Zestimate. Pseudo-accurate Zestimates bind customers to Zillow’s services. Zestimates represent product differentiation at its finest, especially when it comes to the impression of accuracy that no other portal claims to have. When buyers and sellers engage real estate professionals, Zestimates give them a sense of information asymmetry — they can “turn the tables” on their professionals with the knowledge they’ve gained online. It gives some a false feeling of power, and others a misperception that the professional is less valuable.

If the Zestimate as it is today were scrapped, and a price range and a rating of confidence in that range were displayed either on its own or, as a compromise, in addition to the specific number, this would place the locus of knowledge and experience back with the Realtor®. The buyer or seller would be asking the Realtor®, “What does this range mean? Where do you think this home actually falls between the top and bottom numbers based on the condition of the house?” Changing the Zestimate to a price range and confidence rating would be good for buyers and sellers, because they would have more realistic if not more definite ideas about prices, and because it would cement rather than distance the relationship they have with their agents, whom Zillow and I both agree are still necessary to the home-buying process. Meanwhile, by modifying this feature, Zillow could eliminate one of the biggest criticism magnets regarding the site. To Zillow, are these benefits worth its taking its perceived omniscience down a peg, and somewhat lessening its product differentiation? I would argue that they are. I’m sure the people at Zillow have considered this approach, but I think it’s worth continuing to reevaluate.

mattsretechblog: matt cohen (Default)
2015-07-07 12:00 am
Entry tags:

NAR’s D.A.N.G.E.R. Report – Focusing on the Security Risk

Understanding the Risk and How to Address It

In the D.A.N.G.E.R. Report, compiled for NAR by Stefan Swanepoel, a “security breach” is listed as a “high risk” for MLSs, though it is quite clear that the risk is to the industry more generally. According to Stefan, “Cyber criminals could attack the industry, breach the MLS, and cause disruption … as transaction management systems and mortgage systems are added or integrated, this threat becomes more serious.”

Only some leaders in our industry are taking this risk – and their responsibility – seriously. Everyone knows how important installing security updates (“patching” or “upgrading”) is, but between 28% and 46%, depending on industry segment, don’t take even that basic, no-cost step for their web servers, even though their organization’s websites are often a gateway for services for many stakeholders. Following is a chart of different industry segments, showing what percent were running unpatched servers with known vulnerabilities as of January 2015:

[Chart not preserved.]
We’ve got to do better than that. So, where does the industry need to focus its attention?

Security Assessment

Security risk assessment is the first step. Clareity Consulting has been helping clients with security risk assessments for the last thirteen years. Organizational security is not something executives should just be handing off to their technical staff, since many of the most important and foundational areas of security are non-technical. Vetting new staff and training them in how to deal with data securely is critical. Writing good policy and procedure documents makes it clear to employees, both technical and non-technical, what their responsibilities are in securely operating your business. Detailed contracts extend these responsibilities to those who provide you with services, and to independent contractors. Also, physical security is extremely important – everything from making sure access is granted only to authorized people to making sure that sensitive information is handled properly and disposed of in a timely and secure manner. Areas of more technical evaluation include routers and firewalls, wireless devices, virtualization environments, websites and applications, installed software, use of encryption, server and workstation operating systems, mobile device configurations, printers, copiers, and much more.

Payment Card Industry (PCI) Data Security Standard (DSS) compliance is also necessary in order for your business to process credit card information. The latest Payment Card Industry Data Security Standards (PCI DSS) now require compliance from organizations whose websites redirect their e-commerce transaction-processing functions to a third party, even though these organizations are outsourcing their credit card functions to that third party.

Having an in-depth knowledge of how information flows in our industry as well as both the business and technical aspects of electronic transactions enables Clareity Consulting to help organizations with this kind of risk assessment most efficiently.

Authentication

The D.A.N.G.E.R. Report focuses on login authentication applied to MLSs and integrated systems such as transaction management and document management. Of course, only weak authentication is in place for broker systems that are also integrated with such systems, and for the direct logins to those systems. Only about half the industry has implemented strong authentication for the MLS and integrated products and, in some cases, even those that have implemented strong authentication for the MLS have not required that all licensed and/or integrated software integrate the stronger measures.

Our industry has a complicated authentication problem to solve. Most login security solutions are not designed to work when the users actively wish to share their account. When mobile professionals have unusual computer use patterns and share computers in a “bullpen” situation, it is even more difficult to provide an effective security system. Even when an MLS has extensive fines for account sharing, we have found that subscribers are quite willing to try to share their login account with non-subscribing professionals, consumers, and others. Most MLSs prefer an authentication solution that focuses on stopping long-term account sharing to one that may cause some inconvenience to users but more reliably prevents an unauthorized user from using account information to access an account even once.

Clareity Security, the leading provider of strong authentication to the industry, has an amazing array of technology that can be used to provide more proactive security, including an exciting new biometric method for mobile devices that is patent pending. Even the current technology Clareity Security has in place at MLSs can be configured to be more stringent, but the industry has to have the will to implement it in that way. Hopefully the industry will move in that direction before there is a publicized security breach.

Clareity Security has always recognized the need to balance cost and convenience with security. That is why Clareity Security has built a risk-based scoring system that has been fully customized to real estate use cases, ensuring that when strong authentication is necessary based on risky behavior and/or application sensitivity, it can be implemented.
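As a generic illustration of the risk-based scoring idea (not Clareity Security’s actual model – its signals and weights are its own), the concept can be sketched as accumulating risk signals for a login attempt and stepping up authentication once a threshold is crossed:

    # Generic illustration only, with invented signals and weights:
    # score a login attempt and require a second factor past a threshold.
    RISK_WEIGHTS = {
        "new_device": 40,
        "new_geolocation": 30,
        "odd_hours": 10,
        "sensitive_application": 30,  # e.g. transaction management vs. read-only
    }
    STEP_UP_THRESHOLD = 50

    def requires_second_factor(signals):
        score = sum(RISK_WEIGHTS[s] for s in signals & RISK_WEIGHTS.keys())
        return score >= STEP_UP_THRESHOLD

    print(requires_second_factor({"new_device", "odd_hours"}))  # True (score 50)
    print(requires_second_factor({"odd_hours"}))                # False (score 10)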

Clareity Security also leads the industry in the best practice implementation of secure single sign-on (SSO). Using the SAML (Security Assertion Markup Language) standard, the company has provided a way for hundreds of applications to be SSO integrated while still maintaining good security. I have also seen some very bad implementations of SSO that create tremendous risk for the organization because they have little to no security.

There are new, emerging authentication challenges as well. For example, it is very difficult to protect authentication to APIs, especially those that are designed to be used by mobile applications. This is an issue that I presented at a RESO meeting in 2013 and, to date, only limited protective mechanisms have been developed by some vendors.

[UPDATE: Clareity was acquired by CoreLogic in 2017 and both authentication solutions and professional services are provided under that brand.]

Screen Scraping

Though consumers are providing their agents with information solely for the purpose of marketing their property for sale, it’s very difficult to ensure real estate information is being used for only that legitimate purpose, since our industry sends the content to so many locations online. One of the biggest challenges is “screen scraping”, when someone copies large amounts of data from a web site – manually or with a script or program. There are two kinds of scraping: “legitimate scraping”, such as search engine robots that index the web, and “malicious scraping”, where someone engages in systematic theft of intellectual property in the form of data accessible on a website. Malicious scraping has been a severe problem in the real estate industry for a decade or more. Bad actors and their software “bots” can grab MLS data and resell it or use it for their own purposes. The very largest sites, such as Realtor.com, have invested millions in anti-scraping solutions, and these solutions have to be constantly updated as scrapers become ever more sophisticated. However, as some sites take steps to protect the content, that pushes the scrapers out to harvest content from other sites that are less well protected.

For our industry, Clareity Consulting recommends a company it has worked with (and is currently consulting for), Distil Networks, which provides a robust and scalable technology solution at a cost that even an individual agent can afford for their site. Distil Networks’ solution has so far been adopted by one MLS vendor, some IDX vendors, and some individual VOW sites. While some advertising portals have implemented anti-scraping measures as well, the industry has a long way to go to protect its content both in the MLS system and its public-facing modules (client collaboration, framed IDX) and many of the other locations to which content has been syndicated.

[UPDATE: In the years since this article was written, other good anti-scraping options have emerged, such as Incapsula and Akamai Bot Manager.]
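Commercial anti-scraping products combine many detection signals. As a taste of the simplest one, here’s a Python sketch of per-client request-rate analysis. The thresholds are illustrative, and determined scrapers rotate IP addresses precisely to evade this kind of check, which is why the commercial products go much further:

    # Naive scraper detection using request rate per client, the simplest of
    # the many signals commercial anti-scraping products combine. Thresholds
    # are illustrative; serious scrapers rotate IPs to evade this check.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_REQUESTS_PER_WINDOW = 120  # far more than a human browsing listings

    request_log = defaultdict(deque)  # client key -> recent request timestamps

    def looks_like_a_bot(client_key):
        now = time.time()
        log = request_log[client_key]
        log.append(now)
        while log and log[0] < now - WINDOW_SECONDS:  # drop expired entries
            log.popleft()
        return len(log) > MAX_REQUESTS_PER_WINDOW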

Next Steps

Information security is not something where one can “set it and forget it”. New challenges emerge regularly. The policies and procedures, contract terms, and even the security audit tools I used five years ago have changed radically – and they keep changing. We’ve got to get both authentication and screen scraping under control. If you’re an executive in this industry, whether at an association, MLS, broker, or software provider, it’s time to move faster and press harder on security. Start with an assessment. I can help with that. Then we can figure out the next steps for your organization. Just remember: security isn’t something you achieve; it’s an ongoing practice.

mattsretechblog: matt cohen (Default)
2015-04-27 12:00 am
Entry tags:

Security Assessment: Demystifying the Process

Don’t Be Afraid of the ‘Audit’

Information security laws are still a patchwork nationwide, but an increasing number of industry organizations are finding that they need our assistance to comply with laws and other applicable requirements. The purpose of this post is to help demystify the process.

Many industry organizations are just realizing all of the requirements that apply to them. For example, the latest Payment Card Industry Data Security Standards (PCI DSS) now require compliance even from organizations that outsource their e-commerce transaction-processing functions to a third-party service provider but whose website controls the redirection to that provider. Understanding those types of compliance requirements, as well as those mandated by federal, state, or provincial legislation, is the starting point for any assessment.

A lot of clients take comfort in my approach to assessing security. For many organizations, and especially for the IT people, a security assessment can seem daunting. What if issues are found? Will we look bad in front of the assessor? All I can do is assure clients that I’m there to help, not to judge. Yes, issues are typically found, especially if it’s an organization’s first assessment or if it hasn’t had one in a few years. Finding issues is simply the first step toward addressing them.

Some assessors spend most of their time by themselves with their assessment tools, then present a report that feels like an adversarial “Gotcha!” to clients. I take a very different approach, working alongside my client and using tools and checklists in collaboration with them. This has some important benefits. Because there are no surprises at the end of the assessment, there’s less of an adversarial feeling. And by educating my client in the use of common, free (or inexpensive) assessment tools and other security resources, they become empowered: IT staff are left feeling more educated and valuable, rather than defeated by an outsider finding issues. Best of all, clients who are empowered to perform at least some level of ongoing self-assessment are more likely to maintain better security in the long run.

Before the visit: Typically, I schedule two to three days for a visit with my client. A few weeks before the visit, I ask for any security-related information the organization might have: a list of websites and apps, information security policies (usually part of an employee handbook), a list of third-party service providers and the parts of contracts that are relevant to security, and the office and data center IP address ranges. If some of that information isn’t available, that’s okay. If I’m going to do any testing of applications hosted by third parties, at that point I need my client to coordinate that testing with their service provider. Then I review the materials provided and perform some initial “external” testing prior to my visit. If assistance with PCI DSS compliance is requested, I work with my client to start that process as well.
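As a small example of what that external testing can include, here’s a Python sketch that checks whether a client’s sites send some basic security response headers. The site list is hypothetical, and a real assessment examines far more than this:

    # Sketch of one small pre-visit "external" check: do the client's sites
    # send basic security response headers? The site list is hypothetical.
    import urllib.request

    SITES = [
        "https://www.example-mls.com",
        "https://members.example-mls.com",
    ]
    EXPECTED_HEADERS = [
        "Strict-Transport-Security",  # enforce HTTPS on future visits
        "X-Content-Type-Options",     # prevent MIME-type sniffing
        "X-Frame-Options",            # resist clickjacking via framing
    ]

    for site in SITES:
        with urllib.request.urlopen(site, timeout=10) as response:
            missing = [h for h in EXPECTED_HEADERS if h not in response.headers]
        print(site, "is missing:", ", ".join(missing) or "none")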

During the visit: I like to start discussions with management – looking together at staffing practices, physical security, policy and procedure, contracts, and other less-technical aspects of security. Then I dive into the technology with the staff (or sometimes contractors) who are responsible for managing it. Together, we’ll look at everything from routers and firewalls down to the operating systems, and everything in between. If PCI DSS compliance is in progress, we review any outstanding questions my client needs assistance with. At the end of the visit, if everyone is available, I like to bring everyone involved together to discuss the findings and plan issue remediation.

After the visit: Sometimes there are subsequent discussions of findings after the visit. Also, I provide a lot of phone and email support and follow up, to ensure that the organization is efficiently moving forward in their efforts to improve security and to answer questions that arise along the way.

Hopefully this post has demystified the security assessment process. When it comes to information security, our industry has a lot of work to do. I benchmark the industry regularly in a number of ways. One small measure I take is “What percentage of websites are running on known-insecure web server platforms?” My present benchmark for that measure: 28% of the top 50 MLSs (by subscriber count), 46% of the top 50 brokers (by transaction volume), 40% of top local associations, and 35% of our state associations. And that measure is just the tip of the iceberg – again, there’s a lot of work to do! Contact me (612-747-5976), and let’s start working on it together.
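P.S. For the curious, here’s roughly how a benchmark like that can be taken: fetch each site’s Server banner and compare it against web server versions known to be past end-of-life. Both lists below are short and illustrative, and banners can be suppressed or faked, so this only approximates what a fuller assessment confirms:

    # Rough sketch of the "insecure web server platform" benchmark: compare
    # each site's Server banner against known end-of-life versions. Both
    # lists are illustrative; banners can also be suppressed or faked.
    import urllib.request

    KNOWN_INSECURE_BANNERS = [
        "Apache/1.", "Apache/2.0", "Microsoft-IIS/5.", "Microsoft-IIS/6.",
    ]
    SITES = ["https://www.example-association.org"]  # hypothetical site list

    insecure = 0
    for site in SITES:
        with urllib.request.urlopen(site, timeout=10) as response:
            banner = response.headers.get("Server", "")
        if any(banner.startswith(bad) for bad in KNOWN_INSECURE_BANNERS):
            insecure += 1
    print(f"{100 * insecure / len(SITES):.0f}% on known insecure platforms")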