Stop Wasting Time on Ideas That Won’t Work
Use Claude to surface the hidden risks in your strategy before you commit budget, teams, or reputation.
Pressure-test any idea in under 10 minutes
Benefits of using the EMT System with Claude
Uncover the riskiest assumptions inside a business idea, strategy, or initiative before committing resources.
Expose what your team is assuming but not saying
Identify the most important assumptions (risks) to test first
Turn those assumptions into simple experiments
Run fast tests before committing real resources
Run the Precoil EMT System with Claude
Use Claude to walk through the Extract → Map → Test system and uncover the riskiest assumptions inside a business idea, strategy, or initiative before committing resources.
Instead of debating opinions, the EMT skill helps you:
Surface hidden assumptions
Prioritize the biggest risks
Design small tests to learn faster
The goal isn’t to prove ideas right.
The goal is to avoid large mistakes early.
Use the Precoil EMT Skill to Structure Better Decisions
Leaders often ask:
What assumptions are we making?
Which risks matter most?
What should we test first?
The Precoil EMT Claude Skill helps structure those conversations quickly.
It guides you through the same thinking used in decision sprints and innovation programs, but in a lightweight format you can run directly in Claude.
How the EMT Skill Improves Decision Making
Instead of relying on intuition or long debates, the skill helps you:
Extract the hidden assumptions behind an idea
Separate Desirability, Viability, and Feasibility risks
Identify the most important assumptions to test first
Turn those assumptions into simple experiments
The goal isn’t certainty.
The goal is decision clarity.
Here’s what EMT looks like on an idea:
I’m considering adding an AI feature that predicts health issues for cats to our wearable cat health monitoring device. The AI feature would cost an additional $15/month and we would initially target existing customers in San Francisco, CA.
Extract
Surface the assumptions across Desirability, Viability, and Feasibility that would break the idea if wrong.
- I believe existing customers experience enough uncertainty about their cat's health that a predictive AI feature would feel meaningfully different from the monitoring they already have. If customers feel adequately informed, there's no felt gap for prediction to fill.
- I believe cat owners in San Francisco are willing to act on AI-generated health predictions before visible symptoms appear. Predictive value only materializes if users trust the output enough to change behavior.
- I believe existing customers perceive AI-based health prediction as meaningfully more valuable than the alert-based monitoring they already use. If the delta feels incremental, adoption motivation collapses regardless of price.
- I believe enough existing customers will pay an additional $15/month to cover the cost of building, maintaining, and improving the AI prediction model. If attach rate is low, the economics of sustaining the feature erode quickly.
- I believe the San Francisco pilot cohort is large enough to produce statistically meaningful retention and conversion data before a broader rollout decision is required. A small pilot base may yield ambiguous signals, forcing a premature go/no-go call.
- I believe the marginal cost of delivering AI predictions per user stays low enough that $15/month produces sustainable unit economics at realistic attach rates. If inference costs scale with usage, margin assumptions may not hold under real load.
- I believe the sensor data currently collected is sufficient in quality, frequency, and variety to train a prediction model with clinically meaningful accuracy. If existing data lacks resolution, the model underperforms or requires hardware changes before launch.
- I believe the team can validate AI predictions against real health outcomes quickly enough to establish user trust before churn becomes a problem. Without veterinary outcome data, model accuracy claims remain unverified.
- I believe the company can navigate any regulatory or liability exposure associated with making AI-generated health predictions about animals before launch. Health prediction for pets may attract scrutiny around accuracy claims and misdiagnosis liability.
DVF Tension
The feature's perceived value depends on predictions being accurate enough to change owner behavior, but building that accuracy requires veterinary outcome data that may take months to accumulate — meaning the user experience at launch may not yet be good enough to justify the trust it needs to earn.
Map
Place assumptions on a 2×2 of Importance vs. Evidence. The top-right quadrant — high importance, low evidence — is where risk lives.
Placement
The overall placement logic is sound — assumptions are spread across all four quadrants and show genuine differentiation between what the team knows and doesn't know. The clustering of high-importance, low-evidence assumptions in the top-right reflects good instincts about where risk lives.
DVF Coverage
All three categories are represented. Feasibility is weighted toward the left, suggesting the team feels more confident in delivery than the situation may warrant — particularly given that the AI model accuracy assumption appears underweighted on importance.
Assumption to Reconsider
"I believe we can develop an AI model that accurately predicts a range of relevant health issues" is placed bottom-center, implying low importance. Model accuracy is the core premise of the entire feature — if it fails, nothing else holds. It belongs in the top half, and further right given the lack of clinical validation data.
What's Working
The team correctly identified that user trust — "I believe cat owners will trust AI-driven predictions without significant skepticism" — sits in high-importance territory with limited evidence. This is a sophisticated placement that many teams miss or dismiss too early.
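As a rough illustration, the 2×2 placement described above can be expressed as a simple classifier. The assumption names and 0–10 scores below are hypothetical stand-ins, not output of the EMT skill:

```python
# Hypothetical sketch: scoring assumptions on Importance (y) and Evidence (x).
# Per the EMT map, the top-right quadrant is high importance + low evidence.

def quadrant(importance: int, evidence: int, threshold: int = 5) -> str:
    """Classify an assumption on the Importance vs. Evidence 2x2."""
    if importance >= threshold and evidence < threshold:
        return "top-right: high importance, low evidence (test first)"
    if importance >= threshold:
        return "top-left: high importance, strong evidence"
    if evidence < threshold:
        return "bottom-right: low importance, low evidence"
    return "bottom-left: low importance, strong evidence"

# Illustrative scores only.
assumptions = [
    ("Owners trust AI predictions enough to act", 9, 2),
    ("Model reaches clinically meaningful accuracy", 9, 3),
    ("Existing sensor data is sufficient to train on", 6, 6),
    ("Pilot cohort yields statistically clear signals", 4, 4),
]

for name, importance, evidence in assumptions:
    print(f"{name}: {quadrant(importance, evidence)}")
```

The point is not the scores themselves but the forcing function: every assumption gets an explicit importance and evidence rating, so disagreements surface as placements rather than opinions.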
Priority — Top-Right Quadrant
Alignment
I believe the AI's predictions will align with common health concerns cat owners already have or would find valuable. If predictions surface obscure conditions rather than ones owners recognize and fear, the feature will feel irrelevant regardless of technical accuracy.
Conversion
I believe a significant percentage of our existing customer base in San Francisco will convert to this new paid AI feature. This is the revenue gateway assumption for the entire pilot. Without a conversion signal, the economics remain entirely theoretical.
Test
Design an experiment for the riskiest assumption. Define what you're trying to learn, how to run it, and what success looks like — before you start.
"I believe the AI's predictions will align with common health concerns cat owners already have or would find valuable."
What You're Trying to Learn
Whether the health conditions your AI is capable of predicting overlap sufficiently with the conditions your existing customers already worry about. If there is a mismatch between model output and owner concern, the feature will feel irrelevant even if it performs well technically.
How to Run It
1. Recruit 8–12 existing customers from your San Francisco base. Prioritize customers who have engaged with the app recently. Prepare a discussion guide that opens with unprompted questions before introducing anything about the new feature.
2. Conduct 30–45 minute interviews. Begin by asking customers to describe the last time they worried about their cat's health and what prompted it. Then present the conditions your AI can predict and ask them to react — without framing it as a product pitch.
3. Map the conditions owners named unprompted against the conditions your model covers. Calculate the overlap rate. Identify high-concern conditions outside the model's scope, and any outputs that generated low recognition from owners.
How to Measure It
Metric
% of AI-predictable conditions that appear unprompted in owner concern lists, or that owners immediately rate as highly relevant when shown
Pass
70%+ of the AI's predictable conditions are named unprompted or rated highly relevant across at least 8 of 12 interviews
Fail
Fewer than half of owners recognize predicted conditions as relevant, or owners consistently name high-priority concerns outside the model's current scope
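One reading of the pass/fail rule above can be sketched as a quick calculation. The condition names, sample data, and the exact way the thresholds are coded here are illustrative assumptions, not part of the skill:

```python
# Sketch of the overlap metric under one reading of the pass criterion:
# an interview "passes" if 70%+ of the model's predictable conditions were
# named unprompted or rated highly relevant, and the experiment passes if
# at least 8 of 12 interviews do. All data below is made up.

PASS_RATE = 0.70
MIN_PASSING_INTERVIEWS = 8

def interview_passes(model_conditions: set, recognized: set) -> bool:
    """True if enough of the model's conditions resonated in one interview."""
    overlap = len(model_conditions & recognized) / len(model_conditions)
    return overlap >= PASS_RATE

def experiment_passes(model_conditions: set, interviews: list) -> bool:
    """True if the minimum number of interviews cleared the overlap bar."""
    passing = sum(interview_passes(model_conditions, r) for r in interviews)
    return passing >= MIN_PASSING_INTERVIEWS

# Hypothetical example: 5 predictable conditions, 2 of 12 interviews shown.
model = {"kidney disease", "hyperthyroidism", "diabetes",
         "arthritis", "dental disease"}
interviews = [
    {"kidney disease", "hyperthyroidism", "diabetes", "arthritis"},  # 80%
    {"kidney disease", "dental disease"},                            # 40%
]
print(experiment_passes(model, interviews))
```

Writing the criterion down this concretely, before running interviews, is what keeps the team from moving the goalposts after the data comes in.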
Remaining Uncertainty
This experiment confirms alignment between owner concern and model scope, but will not tell you whether owners trust AI-generated predictions enough to act on them once the feature is live.
Want a library of experiments like this? → Explore the Precoil Experiment Library
How to Choose the Right Experiment to Start With
The EMT skill can generate experiment briefs.
But most teams still struggle with:
selecting the fastest experiment
sequencing tests correctly
avoiding weak evidence
The Precoil Experiment Library solves this problem with 50+ tested experiment designs mapped to Desirability, Viability, and Feasibility risks.
Why Run EMT Inside Claude?
Most teams rely on meetings to make decisions. Claude lets you pressure-test ideas instantly, without scheduling another meeting.
The EMT skill doesn't replace judgment. It makes the risk visible before the decision is made.
Get Started with the Precoil EMT Skill
The fastest way to find out if your idea works before you invest in it.
For Product, Strategy and Innovation Leaders.
How to Use the Precoil EMT Skill
1. Download testing-business-ideas-with-claude.zip (not the .md file)
2. Open Claude and go to Settings → Capabilities
3. Ensure Code execution and file creation is enabled
4. Go to Customize → Skills → Upload a skill
5. Run EMT on any idea
Requires a free, Pro, Max, Team, or Enterprise Claude account.
Prompt example:
Run EMT on this idea:
[describe your product, strategy, or initiative]

The Skill Surfaces the Risk. The Library Tells You How to Test It.
The EMT skill designs experiment briefs.
The Precoil Experiment Library contains over 50 tested experiment designs, each mapped to a specific assumption type across Desirability, Viability, and Feasibility, with evidence-strength ratings and sequencing guidance.