From 602e7a4115c59d131bb0a926025310e871afe8f9 Mon Sep 17 00:00:00 2001
From: zlei9
Date: Sun, 29 Mar 2026 09:50:15 +0800
Subject: [PATCH] Initial commit with translated description

---
 SKILL.md               | 173 +++++++++++++++++++++++++++++++++++++++++
 _meta.json             |   6 ++
 competitor-analysis.md |  89 +++++++++++++++++++++
 evidence-grading.md    |  30 +++++++
 validation.md          | 111 ++++++++++++++++++++++++++
 5 files changed, 409 insertions(+)
 create mode 100644 SKILL.md
 create mode 100644 _meta.json
 create mode 100644 competitor-analysis.md
 create mode 100644 evidence-grading.md
 create mode 100644 validation.md

diff --git a/SKILL.md b/SKILL.md
new file mode 100644
index 0000000..99441d7
--- /dev/null
+++ b/SKILL.md
@@ -0,0 +1,173 @@
+---
+name: Market Research
+slug: market-research
+version: 1.0.1
+homepage: https://clawic.com/skills/market-research
+description: "Market research."
+changelog: "Expanded the guidance and clarified when this skill should activate."
+metadata: {"clawdbot":{"emoji":"📊","requires":{"bins":[]},"os":["linux","darwin","win32"]}}
+---
+
+## When to Use
+
+Use this skill when the user needs market evidence, not just opinions. It should activate for market sizing, opportunity validation, competitor landscape work, segment selection, pricing research, whitespace mapping, and expansion decisions.
+
+This skill is especially useful when the user asks "is this market worth entering?", "how big is the real opportunity?", "who else is already winning here?", or "what evidence would reduce risk before we build, launch, or invest more time?"
+
+## Quick Reference
+
+Use the smallest relevant file for the task.
+
+| Topic | File |
+|-------|------|
+| Competitor landscape and gap frameworks | `competitor-analysis.md` |
+| Customer validation and pricing methods | `validation.md` |
+| Evidence quality and confidence rubric | `evidence-grading.md` |
+
+## Research Brief
+
+Start every serious engagement with a compact brief like this:
+
+```text
+MARKET RESEARCH BRIEF
+Decision:
+Target customer:
+Geography:
+Category or substitute set:
+Time horizon:
+Must-answer questions:
+Evidence bar:
+```
+
+If the brief is weak, the research will drift. Tight questions produce better market definitions, better comparisons, and better recommendations.
+
+## Research Modes
+
+Pick the lightest mode that still answers the decision well. Depth should follow the decision, not ego.
+
+| Mode | Best For | Minimum Output |
+|------|----------|----------------|
+| **Quick scan** | Early idea filtering | Market snapshot, top competitors, 2-3 key risks |
+| **Decision memo** | Founders, operators, or investors making a next-step call | Sizing view, segment map, competitor comparison, recommendation |
+| **Launch validation** | New product, feature, or niche entry | Demand signals, pricing checks, interview findings, no-go risks |
+| **Expansion study** | New geography, segment, or adjacent category | SAM filters, local competitors, channel constraints, rollout logic |
+
+## Core Rules
+
+### 1. Define the Decision Before Research Starts
+
+Always anchor the work to one decision:
+- enter or avoid a market
+- prioritize one segment over another
+- shape positioning and pricing
+- validate whether to build, launch, or expand
+
+Research without a decision target becomes a document full of facts and no leverage.
+
+### 2. Size the Market in Layers, Not in Headlines
+
+Never stop at a single big number. Separate:
+
+| Layer | Question | Failure Mode |
+|-------|----------|--------------|
+| **TAM** | How large is the broad category? | Sounds exciting but too abstract |
+| **SAM** | Which part is actually reachable for this product and customer? | Overstates opportunity |
+| **SOM** | What can realistically be won in a specific window? | Turns fantasy into planning |
+
+Whenever possible, show the formula, assumptions, and confidence level. A smaller defensible number is better than a huge vague one.
+
+### 3. Triangulate Evidence and Grade Source Quality
+
+Use at least three evidence families before making a strong claim:
+- market structure data: census, filings, association reports, public benchmarks
+- behavior data: search trends, reviews, job posts, product usage proxies
+- direct customer evidence: interviews, surveys, waitlists, prepayments, LOIs
+
+See `evidence-grading.md` for the confidence ladder. If all evidence comes from one source type, the conclusion is still fragile.
+
+### 4. Segment Before You Generalize
+
+Do not treat "the market" as one blob. Split by:
+- customer type
+- company size
+- geography
+- urgency of problem
+- willingness to pay
+- existing alternatives
+
+Many bad conclusions come from averaging together segments that behave very differently.
+
+### 5. Map Competition Around Customer Choice, Not Only Brand Names
+
+Competitor analysis includes:
+- direct competitors
+- indirect substitutes
+- internal workarounds such as spreadsheets, agencies, or manual processes
+- future entrants with clear adjacency
+
+Use `competitor-analysis.md` to build a positioning map, review-mining matrix, and whitespace view. The real competitor is whatever the customer would choose instead of the proposed offer.
+
+### 6. Favor Revealed Demand Over Stated Enthusiasm
+
+Use interviews and surveys to learn language and patterns, but trust behavior more than compliments.
+
+Strong signals:
+- repeated painful workarounds
+- urgent problem frequency
+- customers introducing others with the same pain
+- willingness to pay, pilot, pre-order, or switch
+
+Weak signals:
+- "great idea"
+- generic survey positivity
+- likes, followers, or broad curiosity with no concrete action
+
+See `validation.md` for interview, survey, and pricing research structures.
+
+### 7. Finish with a Decision-Ready Recommendation
+
+Every deliverable should end with:
+
+```text
+RECOMMENDATION
+- What the evidence supports
+- What remains uncertain
+- What should happen next
+- What would change the recommendation
+```
+
+Good market research reduces uncertainty. Great market research makes the next move obvious.
+
+## Common Traps
+
+- **Top-down theater** -> Huge category numbers create false confidence and weak planning.
+- **Competitor tunnel vision** -> Looking only at visible brands misses substitutes and status-quo behavior.
+- **Segment blur** -> Mixing SMB, enterprise, prosumer, and consumer demand corrupts the conclusion.
+- **Source recency failure** -> Old pricing pages and stale reports make current decisions look safer than they are.
+- **Opinion inflation** -> Survey excitement without action gets mistaken for demand.
+- **No confidence labeling** -> Strong and weak evidence get presented with the same weight.
+- **Research with no recommendation** -> User gets a report but no practical decision path.
+
+## Security & Privacy
+
+This skill does NOT:
+- make hidden outbound requests
+- fabricate customer signals or fake interviews
+- access private competitor systems
+- create persistent memory or maintain a local workspace by default
+- store secrets unless the user explicitly asks for that workflow
+
+Live web research is appropriate only when the task requires current market data or the user asks for external evidence.
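The layered sizing in Core Rule 2 reduces to a few lines of arithmetic. A minimal sketch in Python, where every input number (account count, contract value, reachable and winnable shares) is a hypothetical assumption, not real data:

```python
# Layered market sizing sketch: TAM -> SAM -> SOM with explicit assumptions
# and confidence labels. All numbers below are hypothetical.
tam_accounts = 2_000_000    # assumed businesses in the broad category
acv = 1_200                 # assumed annual contract value in dollars

reachable_share = 0.15      # assumed: fits product, geography, and channel
winnable_share = 0.03       # assumed: realistic win rate in a 3-year window

tam = tam_accounts * acv
sam = tam * reachable_share
som = sam * winnable_share

for layer, value, confidence in [
    ("TAM", tam, "low: top-down category count"),
    ("SAM", sam, "medium: filtered by reachability"),
    ("SOM", som, "medium: depends on win-rate assumption"),
]:
    print(f"{layer}: ${value:,.0f} (confidence: {confidence})")
```

A defensible SOM like this, with the assumptions stated inline, is worth more than a headline TAM.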
+
+## Related Skills
+Install with `clawhub install ` if user confirms:
+- `pricing` - Convert validation findings into pricing strategy and willingness-to-pay decisions.
+- `seo` - Translate validated demand into search-driven positioning and content opportunities.
+- `business` - Connect market findings to strategic choices and operating tradeoffs.
+- `compare` - Structure side-by-side option analysis when multiple markets or segments compete.
+- `data-analysis` - Turn collected numbers into cleaner interpretation and supporting visuals.
+
+## Feedback
+
+- If useful: `clawhub star market-research`
+- Stay updated: `clawhub sync`
diff --git a/_meta.json b/_meta.json
new file mode 100644
index 0000000..4f6ca55
--- /dev/null
+++ b/_meta.json
@@ -0,0 +1,6 @@
+{
+  "ownerId": "kn73vp5rarc3b14rc7wjcw8f8580t5d1",
+  "slug": "market-research",
+  "version": "1.0.1",
+  "publishedAt": 1773253981245
+}
\ No newline at end of file
diff --git a/competitor-analysis.md b/competitor-analysis.md
new file mode 100644
index 0000000..e862c69
--- /dev/null
+++ b/competitor-analysis.md
@@ -0,0 +1,89 @@
+# Competitor Analysis Frameworks
+
+## Landscape Mapping
+
+### Direct vs Indirect Competitors
+
+| Type | Definition | Example (CRM) |
+|------|------------|---------------|
+| **Direct** | Same solution, same customer | Salesforce, HubSpot |
+| **Indirect** | Different solution, same problem | Spreadsheets, email |
+| **Potential** | Could enter your space | Microsoft, Google |
+
+### Positioning Matrix
+
+Map competitors on 2 axes relevant to your market:
+- Price vs Features
+- Enterprise vs SMB
+- Vertical vs Horizontal
+- Simple vs Complex
+
+Find the white space — where are customers underserved?
+
+## Review Mining
+
+### Where to Look
+- **B2B SaaS:** G2, Capterra, TrustRadius
+- **Consumer Apps:** App Store, Play Store
+- **Physical Products:** Amazon, specialized forums
+- **Services:** Yelp, Google Reviews, industry forums
+
+### What to Extract
+1. **Recurring complaints** — Same issue mentioned 10+ times = real problem
+2. **Feature requests** — What's missing from existing solutions?
+3. **Use case patterns** — How do different segments use the product?
+4. **Switching triggers** — Why did they leave their previous solution?
+
+### Template: Complaint Frequency Matrix
+
+| Complaint | Competitor A | Competitor B | Competitor C |
+|-----------|-------------|-------------|-------------|
+| Slow support | 47 mentions | 12 mentions | 89 mentions |
+| Confusing UI | 31 mentions | 56 mentions | 8 mentions |
+| Too expensive | 22 mentions | 3 mentions | 15 mentions |
+
+## Competitive Intelligence Sources
+
+### Public Information
+- **Financials:** 10-K, 10-Q (public companies), Crunchbase funding
+- **Strategy signals:** Job postings, press releases, conference talks
+- **Product changes:** Changelog, Product Hunt launches, app updates
+
+### Ethical Intelligence Gathering
+✅ Public filings and press releases
+✅ Published pricing pages
+✅ User reviews and forums
+✅ Conference presentations
+✅ Open source contributions
+
+❌ Fake customer inquiries
+❌ Social engineering employees
+❌ Scraping behind paywalls
+❌ Accessing internal documents
+
+## Feature Comparison Matrix
+
+### Template
+
+| Feature | Your Product | Competitor A | Competitor B | Priority |
+|---------|-------------|-------------|-------------|----------|
+| Core feature 1 | ✅ | ✅ | ❌ | Must-have |
+| Core feature 2 | 🔄 Building | ✅ | ✅ | Must-have |
+| Advanced feature | ❌ | ✅ | ❌ | Nice-to-have |
+| Unique differentiator | ✅ | ❌ | ❌ | Differentiator |
+
+### How to Prioritize
+1. **Must-have:** Without this, customers won't consider you
+2. **Differentiator:** This is why they'll choose you over alternatives
+3. **Nice-to-have:** Adds value but doesn't drive purchase decisions
+
+## Market Share Estimation
+
+When no public data exists:
+
+1. **Traffic-based:** SimilarWeb, Alexa (if available)
+2. **Employee-based:** Revenue per employee benchmarks × headcount
+3. **Funding-based:** Typical revenue multiple for stage × funding raised
+4. **App download-based:** Download counts → conversion assumptions → paying users
+
+**Always caveat estimates** with methodology and confidence level.
diff --git a/evidence-grading.md b/evidence-grading.md
new file mode 100644
index 0000000..3f92ed8
--- /dev/null
+++ b/evidence-grading.md
@@ -0,0 +1,30 @@
+# Evidence Grading — Market Research
+
+Use this ladder to prevent weak research from sounding stronger than it is.
+
+## Confidence Ladder
+
+| Confidence | Typical Evidence | How to Use It |
+|------------|------------------|---------------|
+| **High** | Public filings, first-party pricing pages, direct customer behavior, signed pilots, prepayments | Strong enough to support a recommendation |
+| **Medium** | Review mining, job posts, trend data, credible analyst reports, repeated interviews | Useful for directional judgment with caveats |
+| **Low** | Founder claims, press releases, one-off anecdotes, generic social chatter | Use only as a lead, never as the core conclusion |
+
+## Minimum Standard by Decision
+
+| Decision Type | Minimum Bar |
+|---------------|-------------|
+| Kill or continue an idea | Medium confidence from multiple source families |
+| Pricing recommendation | Medium-to-high confidence plus direct customer evidence |
+| Market entry recommendation | High confidence on demand, competition, and reachability |
+| Expansion timing | High confidence on segment fit and local constraints |
+
+## Output Rule
+
+Every major conclusion should include:
+- the claim
+- the supporting evidence
+- the confidence level
+- what would invalidate it
+
+If you cannot state all four clearly, the conclusion is not ready.
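The market-share heuristics in `competitor-analysis.md` pair naturally with this ladder: triangulate several rough estimates, then grade the result rather than quoting one number. A minimal sketch where every input value is a hypothetical assumption:

```python
import statistics

# Hypothetical revenue triangulation for a private competitor using three
# estimation heuristics. Every benchmark, ratio, and count below is assumed.
estimates = {
    "employee-based: 300 staff x $120k/employee benchmark": 300 * 120_000,
    "funding-based: $60M raised x 0.5 assumed revenue ratio": 60_000_000 * 0.5,
    "download-based: 2M downloads x 2% paying x $600/yr": 2_000_000 * 0.02 * 600,
}

values = sorted(estimates.values())
midpoint = statistics.median(values)
print(f"Range: ${values[0]:,.0f}-${values[-1]:,.0f}")
print(f"Midpoint: ${midpoint:,.0f} (grade: low-to-medium on the ladder above)")
```

Reporting the spread and the grade together keeps a triangulated guess from masquerading as high-confidence evidence.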
diff --git a/validation.md b/validation.md
new file mode 100644
index 0000000..d0d203d
--- /dev/null
+++ b/validation.md
@@ -0,0 +1,111 @@
+# Customer Validation
+
+## Pre-Building Validation
+
+### The Mom Test Questions
+
+Avoid leading questions. Get facts about past behavior, not future intentions.
+
+❌ "Would you use an app that does X?"
+✅ "How do you currently solve X?"
+✅ "What happened last time you faced this problem?"
+✅ "How much time/money did that cost you?"
+
+### Jobs-to-be-Done Framework
+
+Structure: "When [situation], I want to [motivation], so I can [outcome]."
+
+Example: "When I'm preparing for a board meeting, I want to quickly assess competitor moves, so I can confidently answer questions about market positioning."
+
+### Finding Interview Subjects
+
+**Cold outreach:**
+- LinkedIn (filter by role + industry + company size)
+- Twitter/X (search for people complaining about the problem)
+- Industry Slack/Discord communities
+- Relevant subreddits
+
+**Warm introductions:**
+- Ask existing network for intros
+- Offer value exchange (share research findings)
+
+**Target:** 20-30 conversations before any confidence in patterns
+
+## Validation Signals
+
+### Strong Signals (Worth Building)
+
+| Signal | Weight |
+|--------|--------|
+| Customer gives you money (prepayment, LOI) | ⭐⭐⭐⭐⭐ |
+| Customer spends significant time helping you | ⭐⭐⭐⭐ |
+| Customer introduces you to others with same problem | ⭐⭐⭐⭐ |
+| Customer describes workarounds they've built | ⭐⭐⭐ |
+| Customer articulates the problem in your words | ⭐⭐⭐ |
+
+### Weak Signals (Keep Digging)
+
+| Signal | Reality |
+|--------|---------|
+| "I'd definitely use that" | Polite enthusiasm, not commitment |
+| "Great idea!" | Compliment, not validation |
+| Survey says 80% interested | Stated preference ≠ revealed preference |
+| Lots of social media engagement | Attention ≠ willingness to pay |
+
+## Survey Design
+
+### Question Types
+
+**Screening questions:** Filter to your target audience
+**Behavioral questions:** What have they done (past tense)
+**Preference questions:** What would they choose (less reliable)
+**Open-ended questions:** Capture language and unexpected insights
+
+### Sample Size
+
+| Confidence Level | Margin of Error | Required Sample |
+|-----------------|-----------------|-----------------|
+| 95% | ±5% | ~400 responses |
+| 95% | ±10% | ~100 responses |
+| 90% | ±10% | ~70 responses |
+
+For early validation, directional insights from 50-100 responses are often sufficient.
+
+### Common Survey Mistakes
+
+- Leading questions
+- Double-barreled questions (asking two things at once)
+- Social desirability bias (people say what sounds good)
+- Too many questions (fatigue lowers quality)
+- No screening for irrelevant respondents
+
+## Pricing Research
+
+### Van Westendorp Method
+
+Ask 4 questions:
+1. At what price would this be **too expensive** to consider?
+2. At what price would this be **expensive but worth considering**?
+3. At what price would this be a **good deal**?
+4. At what price would this be **so cheap you'd question quality**?
+
+Plot results to find optimal price range.
+
+### Willingness-to-Pay Interview
+
+"If this product existed today and solved [specific problem], what would you pay for it?"
+
+Follow up: "What would make it worth 2x that price?"
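The sample-size table in the survey section above follows from the standard proportion formula n = z² · p(1 − p) / e² at maximum variance (p = 0.5). A quick sketch that reproduces the table's rounded figures:

```python
import math

# Minimum sample size for estimating a proportion at a given confidence
# (z-score) and margin of error, assuming worst-case variance p = 0.5.
def sample_size(z: float, margin: float, p: float = 0.5) -> int:
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

print(sample_size(1.96, 0.05))   # 95% confidence, ±5%  -> 385 (table: ~400)
print(sample_size(1.96, 0.10))   # 95% confidence, ±10% -> 97  (table: ~100)
print(sample_size(1.645, 0.10))  # 90% confidence, ±10% -> 68  (table: ~70)
```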
+
+### Competitive Pricing Analysis
+
+| Competitor | Pricing Model | Entry Price | Mid-tier | Enterprise |
+|------------|--------------|-------------|----------|------------|
+| Competitor A | Per seat | $15/mo | $45/mo | Custom |
+| Competitor B | Usage-based | $0.01/call | $0.008/call | Volume discounts |
+| Competitor C | Flat rate | $99/mo | $299/mo | $999/mo |
+
+Position your pricing based on:
+- Value delivered vs alternatives
+- Target customer segment
+- Competitive reference points
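The four Van Westendorp answers described above can be reduced to a workable range without plotting. A simplified sketch on hypothetical survey data; it scans candidate prices for broad acceptability rather than computing the full four-curve intersection points:

```python
# Hypothetical answers from five respondents to the four Van Westendorp
# questions, in dollars. Only the outer bounds drive this simplified scan.
responses = [
    {"too_cheap": 5, "good_deal": 15, "expensive": 30, "too_expensive": 50},
    {"too_cheap": 8, "good_deal": 20, "expensive": 35, "too_expensive": 60},
    {"too_cheap": 4, "good_deal": 12, "expensive": 25, "too_expensive": 45},
    {"too_cheap": 6, "good_deal": 18, "expensive": 32, "too_expensive": 55},
    {"too_cheap": 5, "good_deal": 15, "expensive": 28, "too_expensive": 50},
]

def acceptable_share(price, answers):
    # Share of respondents for whom the price is neither suspiciously cheap
    # nor prohibitively expensive.
    ok = sum(1 for a in answers if a["too_cheap"] < price < a["too_expensive"])
    return ok / len(answers)

# Keep whole-dollar prices acceptable to at least 80% of respondents.
acceptable = [p for p in range(1, 71) if acceptable_share(p, responses) >= 0.8]
print(f"Acceptable range: ${acceptable[0]}-${acceptable[-1]}")
```

With real survey data the threshold and price grid would be tuned to the sample, and the classic intersection charts remain the better presentation format.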