ACORD Forms, Loss Runs, and Underwriting Automation: How AI Turns Insurance Submissions Into Usable Insight

Commercial underwriting does not suffer from a shortage of information. It suffers from too much fragmented information arriving in too many formats.
A single insurance submission can include ACORD forms, loss runs, schedules of values, supplemental questionnaires, spreadsheets, broker emails, scanned PDFs, zipped attachments, and one-off documents that do not fit any standard pattern. Some of the most important facts may sit in a broker email. Others may be buried in an Excel file, hidden in a PDF attachment, or implied only when compared against internal underwriting guidelines.
Before an underwriter can make a decision, someone has to turn that pile of documents into something coherent.
That is where time disappears.
In many underwriting workflows, the first several hours are not spent making a judgment about risk. They are spent doing detective work. Teams have to identify the relevant documents, extract the right details, reconcile conflicting information, compare the submission to appetite and guidelines, and summarize the business into a format that is easy to evaluate.
The work is tedious, repetitive, and expensive. Worse, it often ends with a simple conclusion that could have been reached far earlier: the risk was never a fit in the first place.
This is why AI is becoming so important in underwriting automation. The real opportunity is not to replace the underwriter. It is to convert messy submission packages into structured, reviewable, source-linked insight fast enough to improve triage, accelerate decisions, and increase underwriting capacity.
Nomad Data’s Doc Chat helps insurance teams do exactly that: read across ACORD forms, loss runs, SOVs, broker emails, and underwriting guidelines to surface the facts, gaps, contradictions, and red flags underwriters need to review submissions faster.
As Brad Schneider, CEO of Nomad Data, puts it:
“The value of AI in underwriting is not about removing expert judgment. It is about getting underwriters to the point of judgment faster, with the right evidence already organized and linked back to the source.”
For insurers, that distinction matters. The goal is not to make underwriting less human. The goal is to make underwriting less manual.
Why Insurance Submission Data Is So Hard to Use
Insurance submission data is difficult not because underwriters lack expertise, but because the information they need is spread across disconnected sources.
A submission might include a clean ACORD form, but that does not mean the entire file is clean. The application may say one thing, the broker email may add nuance, the loss runs may tell a different story, and the SOV may reveal exposures that were not obvious in the business description.
Underwriters are not just reading documents. They are assembling a complete picture of risk from incomplete, inconsistent, and often messy materials.
Loss Runs
Loss runs are a good example. They may span multiple years, come from different carriers, use inconsistent terminology, and vary widely in formatting. Important details such as claim status, severity, reserve movement, large losses, recurring claim patterns, and open claims may be difficult to compare at a glance.
An underwriter may need to determine whether losses are improving, deteriorating, or misunderstood. They may need to distinguish a one-time severity event from a recurring operational issue. They may also need to identify missing years, unexplained gaps, or claims that require more context.
That is difficult when loss runs arrive as scanned PDFs, carrier exports, spreadsheets, or documents with inconsistent column names and claim descriptions.
Schedules of Values
Schedules of values create a different kind of problem. SOVs often contain essential property-level details such as location, occupancy, construction type, valuations, replacement cost, square footage, and protection information.
But the data is rarely standardized across submissions. One SOV may be a well-structured spreadsheet. Another may be a PDF. Another may include merged cells, missing fields, inconsistent labels, or location details that require manual cleanup before the information can be evaluated.
For property underwriters, an SOV is not just a spreadsheet. It is often the operational map of the risk. If the SOV is hard to interpret, the entire account becomes harder to evaluate.
ACORD Forms
ACORD forms help standardize part of the insurance submission process, but they do not eliminate the need for judgment.
The forms still need to be reviewed for completeness, validated against other materials, and checked for omissions, contradictions, or outdated information. Standardization at the form level does not solve inconsistency across the full submission package.
An ACORD form may provide foundational information such as named insured, requested coverage, limits, deductibles, policy dates, business description, and prior carrier information. But those fields still need to be interpreted in context.
For example, the business description on an ACORD form may appear straightforward until the supplemental questionnaire or broker email reveals an exposure that changes the underwriting decision.
Broker Emails and Supplemental Materials
Broker emails and supplemental documents often contain the context that makes the rest of the file make sense.
A broker may explain a prior loss, clarify a business operation, note a special exposure, or flag a timing issue that never appears in any structured field. In manual workflows, that context is easy to miss. In fast-moving underwriting environments, it is even easier for it to remain disconnected from the formal review.
The problem expands further once the underwriter leaves the submission itself. To evaluate risk, they may also need to consult external information, state-specific considerations, and internal underwriting guidelines scattered across multiple documents.
So the job is not just reading the submission. It is reading the submission in context.
That is why underwriting review often starts as a data-structuring exercise. Before expertise can be applied, the evidence has to be organized.
What Underwriters Need to Extract From Loss Runs, SOVs, and ACORD Forms
What underwriters need to know varies by line of business, but the core objective is always the same: understand the risk as quickly and clearly as possible.
From loss runs, underwriters are usually trying to identify:
- Frequency and severity of prior claims
- Open versus closed claims
- Large losses
- Reserve movement
- Claims by location, coverage, or cause
- Recurring patterns that may signal operational or control issues
- Missing years or incomplete loss history
- Whether loss history is improving, stable, or worsening
They also want to separate one-off events from signals that could affect future profitability.
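To make this concrete, here is a minimal Python sketch of the kind of loss-run rollup described above. It assumes the claims have already been extracted and normalized into simple records; the `Claim` fields, the 100,000 large-loss threshold, and the recurrence cutoff are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Claim:
    year: int
    incurred: float  # total incurred (paid plus reserves)
    status: str      # "open" or "closed"
    cause: str       # normalized cause-of-loss label

LARGE_LOSS_THRESHOLD = 100_000  # assumption: a team-defined severity cutoff

def summarize_loss_run(claims, expected_years):
    """Roll normalized claims up into the signals underwriters scan for."""
    by_year = Counter(c.year for c in claims)
    summary = {
        "frequency_by_year": dict(by_year),
        "open_claims": sum(1 for c in claims if c.status == "open"),
        "large_losses": [c for c in claims if c.incurred >= LARGE_LOSS_THRESHOLD],
        "missing_years": sorted(set(expected_years) - set(by_year)),
        # Assumption: three or more claims with the same cause counts as a pattern.
        "recurring_causes": [cause for cause, n
                             in Counter(c.cause for c in claims).items() if n >= 3],
    }
    # Crude trend check: compare claim counts in the first and second halves of the period.
    years = sorted(expected_years)
    half = len(years) // 2
    early = sum(by_year[y] for y in years[:half])
    late = sum(by_year[y] for y in years[half:])
    summary["trend"] = ("worsening" if late > early
                        else "improving" if late < early else "stable")
    return summary
```

In practice the hard part is the extraction and normalization that precedes a rollup like this; once the data is structured, the arithmetic itself is simple.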
From SOVs, underwriters need property schedules, locations, values, occupancy details, construction characteristics, and other exposure information that shapes how the account should be priced and evaluated.
From ACORD forms, they need named insured information, coverages requested, limits, deductibles, policy dates, business descriptions, prior carrier information, and other foundational details. These forms often provide the baseline facts the rest of the submission will either reinforce or contradict.
Across all of these sources, underwriters are also searching for what is missing.
Are the documents complete? Do they agree with one another? Is key information outdated? Is there a discrepancy between the broker’s narrative and the formal application? Has something material been disclosed only indirectly?
This is where the cost of manual review becomes obvious. A team might spend three to six hours organizing the file, only to discover a single sentence revealing an exposure that the carrier will not underwrite under any circumstances.
All of that work is effectively wasted.
AI changes that equation. It helps insurers get to “no” faster when the risk clearly falls outside appetite. It also helps them get to “tell me more” faster when the account may be viable but requires closer judgment.
Brad Schneider explains the opportunity this way:
“In underwriting, speed matters most when it is paired with evidence. If AI can show the underwriter what matters, why it matters, and exactly where it came from, it becomes much more than a summary tool. It becomes a way to scale better decision-making.”
That is the real promise of underwriting automation. Not faster guesses. Faster access to the facts that matter.
How AI Helps Convert Insurance Submissions Into Usable Insight
The most practical role for AI in underwriting is not abstract prediction. It is document understanding.
A good AI-driven underwriting workflow can read across PDFs, scans, spreadsheets, forms, emails, attachments, and zipped submission packages. Instead of forcing each document into a narrow template, it can treat the full submission as a connected body of evidence.
That allows AI to extract key fields from ACORD forms and supplemental documents, summarize loss runs into meaningful claim patterns, pull property details from SOVs, and surface the business facts underwriters care about most.
It can also compare those facts against underwriting guidelines, flag missing or conflicting information, and identify unusual exposures that deserve escalation.
Just as important, it can link findings back to the source.
In high-stakes insurance workflows, a summary without traceability is not enough. Underwriters need to know where a conclusion came from, what document supports it, and how quickly they can verify it.
The most effective systems do not stop at generic summarization. They produce outputs that match the underwriting team’s actual workflow.
In some environments, that means triage and prioritization. In others, it means structured field extraction into a spreadsheet. In still others, it means a grid comparing submission facts against appetite guidelines and highlighting what has been triggered, what is acceptable, and what falls into a gray area.
That flexibility matters because underwriting is not one workflow.
A team evaluating racehorse coverage is looking for different signals than a team reviewing a law firm, a chain of butcher shops, a nonprofit, a construction contractor, or an auto body operation. The relevant exposures, required details, and decision thresholds change with the risk.
AI becomes most useful when it is configured around what each underwriting group actually cares about.
At Nomad Data, that is how we think about underwriting automation. The goal is not to impose a standardized output on every insurer. The goal is to configure Doc Chat so that each underwriting group receives the structure, triage logic, and analysis that fit its business.
Example Workflow: From Broker Submission to Underwriter Review
A simple underwriting automation workflow often looks like this.
A broker submits a package containing ACORD forms, loss runs, an SOV, emails, and supplemental documents. Those materials may include spreadsheets, PDFs, scanned forms, and miscellaneous attachments that do not follow a single format.
AI ingests the entire package, including the broker emails and attachments that often contain important context. It organizes the submission as a unified evidence set rather than a loose collection of files.
It then extracts key details from across the documents. That may include insured information, requested coverage, property-level exposures, claims history, operational descriptions, and any other fields the underwriting team has defined as important.
Next, the system summarizes the submission into a reviewable format. It highlights the major characteristics of the business, the relevant exposures, the claims patterns, and the points that may affect appetite.
It compares the submission against underwriting guidelines and identifies where the account clearly fits, clearly fails, or sits in an area that requires judgment. If there are missing documents, inconsistent answers, or red-flag exposures, those are surfaced early.
Finally, the underwriter receives a structured output: summary, triage recommendation, extracted fields if needed, and source-linked citations that allow fast verification.
In high-volume environments, this is especially valuable. Underwriters want to know which submissions deserve full attention, which should be skipped immediately, and which require escalation.
AI helps create that order.
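The steps above can be sketched as a small triage function. Everything here is illustrative: the required-document set, the `Finding` record with its source citation, and the keyword-style disqualifier matching are stand-ins for logic each underwriting group would define for itself.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    fact: str    # an extracted fact, e.g. an operational detail
    source: str  # citation back to the originating document

@dataclass
class SubmissionReview:
    summary: list
    triage: str             # "decline", "follow_up", or "full_review"
    missing_documents: list
    red_flags: list

REQUIRED_DOCS = {"acord", "loss_runs", "sov"}  # assumption: minimum package

def review_submission(documents, findings, disqualifiers):
    """First-pass triage: completeness check plus disqualifier scan."""
    missing = sorted(REQUIRED_DOCS - {d.lower() for d in documents})
    hits = [f for f in findings
            if any(term in f.fact.lower() for term in disqualifiers)]
    if hits:
        triage = "decline"      # clearly outside appetite
    elif missing:
        triage = "follow_up"    # incomplete package: go back to the broker
    else:
        triage = "full_review"  # route to an underwriter with the evidence attached
    return SubmissionReview(
        summary=[f.fact for f in findings],
        triage=triage,
        missing_documents=missing,
        red_flags=[f"{f.fact} ({f.source})" for f in hits],
    )
```

Note that even this toy version keeps the citation attached to each red flag, which is the property that makes the output verifiable rather than a black-box recommendation.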
Where Underwriting Automation Adds the Most Value
The clearest value often starts with submission triage.
Many underwriting teams are not limited by opportunity. They are limited by how many opportunities they can realistically review. If every submission requires hours of manual orientation before appetite can even be assessed, capacity disappears quickly.
AI helps insurers process more opportunities by identifying likely non-starters faster and moving viable submissions into a more structured review path.
1. Submission Triage
Submission triage is one of the strongest use cases for underwriting automation because it targets the beginning of the workflow, where time is often lost.
Instead of requiring an underwriter to manually review every document before determining whether the account is worth deeper analysis, AI can create a first-pass view of the submission.
It can answer questions such as:
- Does this account fit appetite?
- Are there obvious disqualifying exposures?
- Are required documents missing?
- Is the loss history acceptable?
- Are there details that require escalation?
- Is the submission complete enough to review?
This helps underwriting teams focus attention where it matters most.
2. Appetite Checks
Appetite checks are another high-value use case.
Instead of requiring an underwriter to manually read the file before discovering an obvious mismatch, AI can compare the submission against known guidelines early in the process and surface disqualifying exposures immediately.
This is especially useful when guidelines are nuanced, spread across multiple documents, or specific to industry, geography, coverage type, or risk profile.
AI can help determine whether the account clearly fits, clearly does not fit, or requires additional judgment.
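That three-way outcome is easy to express once the facts have been extracted. The sketch below assumes facts arrive as a simple dictionary; the rule names and thresholds in the usage example are hypothetical, and a missing fact is deliberately routed to judgment rather than silently passed.

```python
def appetite_check(facts, rules):
    """Classify each appetite rule as 'fits', 'fails', or 'needs_judgment'.

    `rules` maps a rule name to a (field, predicate) pair. A fact that was
    never found anywhere in the submission is treated as a gray area,
    not as a pass.
    """
    results = {}
    for name, (field, predicate) in rules.items():
        value = facts.get(field)
        if value is None:
            results[name] = "needs_judgment"
        else:
            results[name] = "fits" if predicate(value) else "fails"
    if "fails" in results.values():
        overall = "outside_appetite"
    elif "needs_judgment" in results.values():
        overall = "refer"
    else:
        overall = "in_appetite"
    return overall, results
```

The design choice worth noting is the asymmetry: one clear failure is enough to decline, but a single unknown only downgrades the account to a referral.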
3. Loss Runs Review
Loss history review is one of the most time-consuming parts of underwriting.
AI can summarize prior claims, detect recurring patterns, identify large losses, and help bring consistency to the review of loss runs.
For example, it can help highlight whether a submission has recurring slip-and-fall claims, repeated auto incidents, multiple claims tied to a specific location, or a severity trend that deserves closer review.
It can also help identify missing loss years or inconsistencies between the broker narrative and the provided loss history.
4. SOV Analysis
SOV analysis benefits in a similar way.
Property-level details can be extracted, organized, and compared across locations and values without requiring a human to manually normalize every spreadsheet.
AI can help identify missing fields, unusual values, concentration issues, or locations that require further investigation. For underwriting teams that handle large schedules, this can meaningfully reduce manual review time.
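As a sketch of the normalization problem, the code below maps a few of the inconsistent column headers seen across SOVs to canonical fields, then flags missing data and value concentration. The alias table, required fields, and 40 percent concentration rule are illustrative assumptions; real schedules need a far larger alias set.

```python
# Assumption: a hand-maintained map from header variants to canonical fields.
HEADER_ALIASES = {
    "tiv": {"tiv", "total insured value", "total_value", "replacement cost"},
    "construction": {"construction", "const type", "construction type"},
    "address": {"address", "location", "street address"},
}
REQUIRED_FIELDS = {"tiv", "construction", "address"}

def normalize_row(raw_row):
    """Rename a raw SOV row's headers to canonical field names."""
    row = {}
    for header, value in raw_row.items():
        key = header.strip().lower()
        for canonical, aliases in HEADER_ALIASES.items():
            if key in aliases:
                row[canonical] = value
    return row

def audit_sov(raw_rows):
    """Normalize rows, then flag missing fields and TIV concentration."""
    rows = [normalize_row(r) for r in raw_rows]
    issues = []
    for i, row in enumerate(rows, start=1):
        missing = REQUIRED_FIELDS - {k for k, v in row.items() if v not in (None, "")}
        if missing:
            issues.append(f"location {i}: missing {sorted(missing)}")
    total = sum(float(r.get("tiv") or 0) for r in rows)
    # Assumption: any location above 40% of schedule TIV is worth a second look.
    for i, r in enumerate(rows, start=1):
        tiv = float(r.get("tiv") or 0)
        if total and tiv / total > 0.40:
            issues.append(f"location {i}: concentration {tiv / total:.0%} of schedule TIV")
    return rows, issues
```

The point of the sketch is the shape of the work, not the rules themselves: normalize first, then every downstream check becomes a few lines.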
5. ACORD Forms Review
Because ACORD forms are a standard part of many insurance submissions, they are often a natural starting point for automation.
AI can extract key fields, compare them to other documents, and flag inconsistencies. For example, it can compare named insured information, requested coverages, limits, business descriptions, and policy dates against the broker email, supplemental questionnaire, or loss runs.
This helps underwriters quickly see whether the ACORD form is complete and whether it aligns with the rest of the submission.
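A minimal cross-check between ACORD fields and other documents might look like the sketch below. The field names and the case-insensitive string comparison are assumptions; a production system would normalize entity names and punctuation more carefully before flagging a mismatch.

```python
def cross_check(acord_fields, other_sources):
    """Flag fields where another document disagrees with the ACORD form.

    `other_sources` maps a document name to the fields extracted from it.
    Returns (field, acord_value, document, other_value) tuples.
    """
    discrepancies = []
    for field, acord_value in acord_fields.items():
        for doc_name, fields in other_sources.items():
            other = fields.get(field)
            if other is None:
                continue  # the other document is silent on this field
            if str(other).strip().lower() != str(acord_value).strip().lower():
                discrepancies.append((field, acord_value, doc_name, other))
    return discrepancies
```

Even a naive comparison like this surfaces the right kind of question for broker follow-up: which version of the fact is correct, and why do the documents disagree?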
6. Missing Information Detection
Coverage comparison and missing information detection become easier when the system can read across the full evidence set rather than rely on a single form.
The underwriter is no longer left to reconcile every discrepancy by hand. Instead, AI can surface gaps and inconsistencies early, making broker follow-up faster and more precise.
7. Referral and Escalation Support
Finally, AI can support referrals and escalation by identifying gray areas instead of pretending every answer is binary.
That is where real underwriting judgment still matters most.
AI can help route the right submissions to the right people, summarize why escalation may be needed, and provide the evidence required for a faster review.
Why AI Should Support, Not Replace, Underwriters
The best underwriting AI does not remove the human from the process. It removes unnecessary manual assembly work.
Underwriters are still responsible for judgment. They decide whether a gray-area risk is acceptable, whether a pattern is truly concerning, whether more information is needed, and whether an exception is worth making.
Those are not clerical decisions. They are risk decisions.
AI is most useful when it handles the repetitive work that comes before judgment: reading, organizing, summarizing, comparing, and flagging. That gives underwriters more time to focus on what they were actually hired to do: understand risk.
Trust is central here.
In practice, underwriting teams do not adopt AI because someone tells them to trust it. They adopt it by running it side by side with their existing workflow, comparing outputs, checking accuracy, and giving feedback.
Over time, confidence grows as the system proves that it can perform reliably and as configurations are refined to reflect how the team really works.
That is why citation, auditability, and configurability matter so much.
If an AI system cannot show where its conclusions came from, or if it cannot adapt to the underwriting group’s logic, it will remain a demo rather than a trusted workflow tool.
As Brad Schneider says:
“Insurance teams do not need another generic AI demo. They need systems that can handle real documents, real exceptions, and real underwriting logic. That means citations, auditability, and the ability to configure outputs around how each team actually works.”
That is especially true in commercial underwriting, where the difference between a good decision and a bad one can depend on a single clause, claim detail, location, or exposure buried deep in the file.
What to Look for in an AI Underwriting Automation Solution
Not every AI tool is built for insurance document workflows. When evaluating underwriting automation solutions, insurers should look beyond generic summarization and focus on the capabilities required for real submission review.
Source-Linked Answers
Every summary, recommendation, or extracted field should be traceable back to the underlying document. Underwriters need to verify the evidence quickly, especially when the output influences appetite, pricing, escalation, or broker follow-up.
Cross-Document Reasoning
The system should be able to read across ACORD forms, loss runs, SOVs, broker emails, supplemental questionnaires, and underwriting guidelines together. The value comes from connecting facts across the submission, not summarizing each document in isolation.
Flexible Output Formats
Different underwriting teams need different outputs. Some need a triage recommendation. Others need extracted fields. Others need a comparison against appetite guidelines. A strong solution should adapt to the workflow rather than force every team into the same template.
Ability to Handle Messy Documents
Insurance submissions are rarely clean. The system should be able to process scanned documents, PDFs, spreadsheets, inconsistent layouts, and long document packages without requiring constant template maintenance.
Configurable Business Logic
Underwriting automation should reflect the insurer’s appetite, guidelines, escalation rules, and preferred review structure. Generic AI is not enough for high-stakes insurance workflows.
Auditability
Insurers need to understand how outputs were generated, what sources were used, and how the system performed over time. This is critical for trust, compliance, and continuous improvement.
Turning Messy Insurance Documents Into Faster Decisions
Loss runs, SOVs, ACORD forms, supplemental documents, and broker emails all contain the information underwriters need. The problem is that the information rarely arrives in a form that is easy to use.
For many insurers, underwriting delays are not caused by lack of skill. They are caused by the operational burden of turning fragmented submission packages into something coherent enough to evaluate.
That burden consumes time, limits capacity, and forces highly trained professionals to spend too much of their day on repetitive document work.
AI changes that by converting messy insurance submissions into structured, reviewable, source-backed insight. It helps teams reject poor-fit risks faster, understand viable risks more quickly, and bring consistency to the earliest stages of underwriting.
Most importantly, it does all of this while keeping expert judgment where it belongs: with the underwriter.
At Nomad Data, we see underwriting automation as a practical way to make expert teams more effective. When AI is configured around the realities of each underwriting group, it does not flatten decision-making. It sharpens it.
And that is what turns document chaos into usable insight.
Want to see how this works on real insurance submissions? Find out how Doc Chat helps underwriting teams turn ACORD forms, loss runs, SOVs, broker emails, and other submission documents into structured, source-linked insight.
FAQs
What are ACORD forms?
ACORD forms are standardized insurance forms used to collect and share key information during the insurance application and underwriting process. They often include details such as named insured, coverages requested, limits, deductibles, policy dates, business descriptions, and prior carrier information. While ACORD forms help standardize part of the submission, underwriters still need to validate the information against loss runs, SOVs, broker emails, and supplemental documents.
What are loss runs?
Loss runs are reports that show a policyholder’s claims history over a given period. In commercial underwriting, loss runs help underwriters evaluate prior claim frequency, severity, open claims, closed claims, large losses, and recurring claim patterns. They are a critical part of risk evaluation, but they can be difficult to review manually when formats vary across carriers or years.
What is underwriting automation?
Underwriting automation helps insurance teams reduce manual document review by extracting, summarizing, organizing, and comparing information from submission materials. This can include ACORD forms, loss runs, schedules of values, broker emails, supplemental questionnaires, and internal underwriting guidelines. The goal is not to replace underwriters, but to help them reach informed decisions faster.
Can AI review ACORD forms, loss runs, and SOVs together?
Yes. AI can be used to review ACORD forms, loss runs, SOVs, and other submission documents together as one connected evidence set. This is valuable because key underwriting facts are often spread across multiple documents. A business description in an ACORD form may need to be validated against a broker email, while claims history in loss runs may need to be compared against appetite guidelines.
Why do source citations matter in AI underwriting tools?
Source citations are important because underwriters need to verify AI-generated outputs. A summary or recommendation is only useful if the underwriter can quickly see which document, page, table, email, or field supports the conclusion. In high-stakes insurance workflows, traceability helps build trust and supports more reliable review.
How does Nomad Data’s Doc Chat support underwriting teams?
Nomad Data’s Doc Chat helps insurers turn messy submission packages into structured, reviewable, source-linked insight. It can read across ACORD forms, loss runs, SOVs, broker emails, supplemental questionnaires, and internal guidelines to help underwriting teams triage submissions, extract key fields, identify missing or conflicting information, and compare accounts against appetite. Doc Chat is designed to support expert underwriters by reducing manual document work and making the evidence easier to review.
Will AI replace underwriters?
No. The strongest use case for underwriting automation is supporting underwriters, not replacing them. AI can handle repetitive document work such as reading, extracting, summarizing, comparing, and flagging information. Human underwriters remain responsible for judgment, exceptions, pricing decisions, and final risk evaluation.
