The US housing market has a valuation problem that most people outside mortgage and insurance underwriting never think about.
Transaction volume collapsed from its 2021 peak and has stayed low. Elevated mortgage rates froze move-up buyers in place. The result is a market with far fewer recent comparable sales than traditional automated valuation models were designed to work with. When there are not enough recent comps in a neighborhood, AVM accuracy degrades. In low-liquidity markets, it can degrade significantly.
ATTOM, the property data company covering 160 million US properties, launched a rebuilt AVM on May 5, 2026, designed specifically to address that problem. The new model replaces comparable-sales dependency with an AI-driven architecture trained on more than 30 years of time-adjusted transaction history, generating valuations across 98 million US properties with a median absolute percentage error of 2.9%. More than 80% of valuations fall within 10% of the actual sale price.
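To make those headline metrics concrete, here is a small sketch of how median absolute percentage error (MdAPE) and the within-10% hit rate are typically computed against observed sale prices. The data below is toy data for illustration, not ATTOM's test set.

```python
from statistics import median

def mdape(predicted, actual):
    """Median absolute percentage error across a set of valuations."""
    errors = [abs(p - a) / a for p, a in zip(predicted, actual)]
    return median(errors)

def ppe10(predicted, actual):
    """Share of valuations within 10% of the actual sale price."""
    hits = sum(abs(p - a) / a <= 0.10 for p, a in zip(predicted, actual))
    return hits / len(predicted)

# Toy data: five AVM estimates vs. observed sale prices.
pred = [310_000, 198_000, 452_000, 275_000, 605_000]
sold = [300_000, 210_000, 450_000, 280_000, 560_000]
print(round(mdape(pred, sold), 3))  # 0.033
print(ppe10(pred, sold))            # 1.0
```

Using the median rather than the mean keeps a handful of badly mispriced outliers from dominating the accuracy figure, which is why MdAPE is the standard headline metric for AVMs.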
For mortgage underwriting, insurance pricing, and portfolio risk management, that accuracy threshold matters operationally.
Why traditional AVMs break in low-transaction markets
A traditional AVM works by finding recent sales of similar properties nearby and adjusting for differences in size, age, condition, and features. The approach is intuitive and works well when transaction volume is high. It fails when it cannot find enough recent comparable sales.
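The comparable-sales approach can be sketched in a few lines. This is a deliberately simplified illustration (a single size adjustment at an assumed price per square foot), not any vendor's actual model; note that it has no answer at all when the comp pool is empty.

```python
from statistics import median

def comp_value(subject_sqft, comps, price_per_sqft=150.0):
    """Value a subject property from recent nearby sales.

    comps: list of (sale_price, sqft) tuples for comparable sales.
    price_per_sqft: assumed adjustment rate for size differences
    (illustrative, not a market constant).
    """
    if not comps:
        # The core failure mode: no recent comps, no valuation.
        raise ValueError("no comparable sales available")
    adjusted = [price + (subject_sqft - sqft) * price_per_sqft
                for price, sqft in comps]
    return median(adjusted)

# A 2,000 sq ft subject valued against three recent nearby sales.
comp_value(2_000, [(350_000, 1_900), (360_000, 2_100), (340_000, 1_850)])
```

Real models adjust for many more attributes (age, condition, lot size, features), but the dependency is the same: every adjustment starts from a recent sale, so thin comp pools degrade every estimate.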
The US housing market has been in exactly that condition since 2022. Existing home sales fell from roughly 6.5 million annualized units in early 2022 to under 4 million by 2023 and have remained depressed. In many submarkets, particularly in suburban and rural areas where turnover was already limited, the comp pool has become thin enough that traditional AVMs produce unreliable results.
For lenders processing mortgage applications, an unreliable AVM creates a problem at scale. Either they order more expensive appraisals to compensate, slowing the process and adding cost, or they accept valuations that carry higher uncertainty into their underwriting decisions.
ATTOM’s rebuilt model approaches the problem differently. Instead of looking for recent comparable sales, it models how each neighborhood has evolved over 30 years of transaction history and uses that temporal context to translate historical pricing patterns into present-day values. The model learns from the relationships between property characteristics, local market dynamics, and price history rather than requiring fresh transaction data to function.
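One way to picture time adjustment is projecting an old sale forward along a neighborhood price index. The index values and function below are hypothetical, offered only to illustrate the idea of translating historical prices into present-day terms; ATTOM's actual model is a more elaborate AI architecture.

```python
def time_adjust(sale_price, sale_year, index_by_year, target_year=2026):
    """Scale a historical sale by the ratio of the target-year index
    to the sale-year index (hypothetical neighborhood price index)."""
    return sale_price * index_by_year[target_year] / index_by_year[sale_year]

# Illustrative index levels for one neighborhood (not real data).
index = {2015: 100.0, 2020: 130.0, 2026: 170.0}

# A house that sold for $250,000 in 2015, restated in 2026 terms.
time_adjust(250_000, 2015, index)  # 250,000 * 170/100 = 425,000.0
```

The key property is that the adjustment works even when the neighborhood has had no sales for years: it leans on the long index history rather than on fresh comps.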
The confidence score and what it enables
Each ATTOM AVM valuation comes with a confidence score indicating the reliability of that specific estimate. That is not cosmetic.
In automated underwriting and risk management workflows, a confidence score attached to each valuation allows organizations to set rules about when to accept the AVM output automatically and when to escalate to human review or a traditional appraisal. A high-confidence valuation on a straightforward suburban property in a market with adequate historical data gets processed automatically. A low-confidence valuation on a rural property in a thin market triggers a different workflow.
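A routing rule like the one described above might look like the following. The thresholds and tier names are hypothetical examples of the kind of policy a lender would set, not values published by ATTOM.

```python
def route_valuation(confidence: float) -> str:
    """Route an AVM result based on its confidence score (0.0-1.0).
    Thresholds are illustrative policy choices, not vendor defaults."""
    if confidence >= 0.90:
        return "auto-accept"      # straightforward property, rich data
    if confidence >= 0.70:
        return "human review"     # usable estimate, verify before relying on it
    return "order appraisal"      # thin market, fall back to a full appraisal

route_valuation(0.95)  # "auto-accept"
route_valuation(0.55)  # "order appraisal"
```

The business value is in the split itself: appraisal spend concentrates on the cases where the model admits uncertainty, instead of being applied uniformly.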
According to the Federal Housing Finance Agency’s AVM regulation framework, quality control standards for AVMs used in mortgage origination require that models include measures of confidence or reliability alongside their estimates. The ATTOM model’s confidence scoring is directly aligned with that regulatory direction.
Who uses this and what they do with it
ATTOM is selling this to mortgage companies, insurance providers, real estate investment firms, and proptech platforms. The delivery options (API, bulk licensing, and cloud platforms including Snowflake and Databricks) reflect an enterprise customer base that is integrating AVM data into automated workflows rather than looking up individual property values manually.
For a mortgage company processing thousands of applications per week, an AVM that performs reliably in low-liquidity markets reduces the number of fallback appraisals required, lowering cost and processing time. For an insurance company pricing homeowner policies across a national portfolio, a 2.9% median error model provides a more reliable replacement cost basis than alternatives that break down in thin markets.
The 30-year historical foundation also addresses a specific use case that standard AVMs handle poorly: properties that have not transacted recently. A house last sold in 2015 in a neighborhood with few recent sales is exactly the kind of property where comparable-sales approaches produce the widest error ranges. ATTOM’s model has 30 years of market evolution data to work with regardless of recent transaction activity.
Editorial disclosure
This article is based on a press release issued by ATTOM and has been independently rewritten and editorially expanded. It covers the launch of ATTOM’s next-generation AI-powered automated valuation model. ATTOM is a privately held company. Accuracy metrics cited, including 2.9% median absolute percentage error and 80% within 10% of sale price, are as reported by ATTOM based on internal out-of-sample testing and have not been independently verified. Market context is sourced from the Federal Housing Finance Agency. Commentary reflects the author’s own assessment. The information provided on this website is for informational and educational purposes only. Our content is derived strictly from verified online sources to ensure accuracy and objectivity. This analysis does not constitute financial, investment, or professional advice. Readers are encouraged to consult with qualified professionals before making decisions based on this information. For more information, please see our full DISCLAIMER.