When evaluating math competitions—for coaching decisions, admissions context, or understanding the competitive landscape—it helps to have a structured framework. This article presents three lenses: integrity and reputation, competition impact, and geographic reach.
Not all competitions are equally trustworthy. Cheating—leaked exams, weak proctoring, long testing windows—has affected some contests more than others. A strong result at a low-integrity contest tells you less than the same result at a high-integrity one.
Integrity in US Math Competitions rates major contests on a 1–10 scale (1 = rampant cheating, 10 = very clean), based on community discussions (AoPS, Reddit), documented incidents, and structural analysis. Key takeaways:
| Integrity Level | Examples |
|---|---|
| 9 | USAMO/USAJMO, IMO, HMMT, MathCounts National — proof-based or tightly proctored, nearly impossible to fake |
| 8 | PUMaC, ARML, CMIMC, BMT, BAMO, DMM, CMM — in-person at university venues, strong proctoring |
| 7 | MMATHS — multi-site adds some variance |
| 4 | AIME — repeated leaks, school-proctored |
| 3 | AMC 10/12 — massive documented leaks |
When comparing competitions, start with integrity. A contest with a low rating may have inflated or unreliable results. For high-stakes decisions, prioritize consistency across multiple high-integrity contests. See Using data to identify integrity issues for how to use this database to spot red flags.
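To make that filtering concrete, here is a minimal Python sketch that restricts a student's record to high-integrity contests before comparing. The `INTEGRITY` map mirrors the table above, but the results format, field names, and threshold are hypothetical, not the database's actual schema.

```python
# A minimal sketch, assuming a hypothetical per-student results list.
# The INTEGRITY ratings mirror the table above; everything else is illustrative.

INTEGRITY = {
    "USAMO/USAJMO": 9, "IMO": 9, "HMMT": 9, "MathCounts National": 9,
    "PUMaC": 8, "ARML": 8, "CMIMC": 8, "BMT": 8, "BAMO": 8, "DMM": 8, "CMM": 8,
    "MMATHS": 7, "AIME": 4, "AMC 10/12": 3,
}

def high_integrity_results(results, threshold=7):
    """Keep only results from contests rated at or above the threshold."""
    return [r for r in results if INTEGRITY.get(r["contest"], 0) >= threshold]

# A stellar AMC score with no corroborating in-person result is exactly the
# pattern the integrity lens is meant to flag.
results = [
    {"contest": "AMC 10/12", "percentile": 99.8},
    {"contest": "HMMT", "percentile": 62.0},
]
print(high_integrity_results(results))
# -> [{'contest': 'HMMT', 'percentile': 62.0}]
```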
Definition: The Competition Impact Index measures how much of the overall elite a competition captures. It is MCP-weighted: rather than a simple head count of the contest’s top 100 ranked students who also appear in the overall top 100, it is the share of the overall top 100’s total MCP held by those students, so higher-ranked students contribute more. See MCP.
\[\text{Impact Index} = \frac{\sum \text{MCP of contest's top 100 who are in overall top 100}}{\sum \text{MCP of overall top 100}} \times 100\%\]

The impact index is computed from the current database for all competitions. See the Competition Ranking page for live impact index data. The index is most meaningful for contests with 50+ ranked participants; smaller contests can show very high values because their entire field is elite.
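As a concrete illustration, here is a minimal Python sketch of the formula above. The `(student_id, mcp)` pair format is an assumption made for the example, not the database's actual schema.

```python
# A minimal sketch of the impact index. Each ranking is assumed to be a list
# of (student_id, mcp) pairs; this shape is illustrative, not the real schema.

def impact_index(contest_top100, overall_top100):
    """MCP-weighted share of the overall top 100 captured by a contest."""
    overall = dict(overall_top100)                    # student_id -> MCP
    contest_ids = {sid for sid, _ in contest_top100}  # contest's top 100
    captured = sum(mcp for sid, mcp in overall.items() if sid in contest_ids)
    total = sum(overall.values())
    return 100.0 * captured / total if total else 0.0

# Toy example: the contest catches 2 of 3 overall-elite students, but those
# two carry most of the MCP, so the index is high.
overall = [("a", 500), ("b", 400), ("c", 100)]
contest = [("a", 0), ("b", 0), ("x", 0)]  # contest-side MCP is unused
print(impact_index(contest, overall))  # 90.0
```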
Some competitions draw primarily from their host region; others attract students from across the country. This affects how “national” a result really is.
Definition: For each competition, we store the count of participants by state (from students.csv or contest-specific results). This lets you see where each contest’s field comes from.
The database tracks state counts per contest. See the Competition Ranking page → Attraction tab for live data: select a competition to view its student distribution by state (pie chart and US map). Comparing contests shows which draw nationally and which draw regionally: BAMO and BMT draw heavily from California, while HMMT February and PUMaC Division A draw from across the country.
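A minimal sketch of how such per-state counts could be tallied. It assumes a flat file with one row per (student, contest) pair and `contest`/`state` columns; the actual students.csv layout may differ.

```python
# A minimal sketch: count participants by state for one contest. Assumes a
# flat CSV with "contest" and "state" columns, which may not match the real
# students.csv layout.

import csv
from collections import Counter

def state_distribution(path, contest):
    """Return state -> participant count; missing states grouped as Unknown."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["contest"] == contest:
                counts[row.get("state") or "Unknown"] += 1
    return counts

# e.g. state_distribution("students.csv", "BMT") would be expected to skew
# toward CA, while "HMMT February" spreads across many states.
```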
Community discussions on AoPS and Reddit often note that East Coast competitions (HMMT, PUMaC, ARML) and West Coast competitions (BMT, BAMO, CMM) each have strong local ecosystems. Students who excel at both coasts’ contests demonstrate broad, consistent ability.
When comparing two competitions, consider the questions below (a short illustrative sketch combining them follows the table):
| Factor | Question to Ask |
|---|---|
| Integrity | Can I trust the results? |
| Impact | Does this contest identify the same elite students as the broader system? |
| Geography | Is the field local or national? |
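Purely as an illustration, the three questions can be wired into a single checklist. Every field name and threshold below is hypothetical, chosen only to show the shape of the comparison, not taken from the database.

```python
# A purely illustrative checklist combining the three lenses. The contest
# dict, its field names, and the thresholds are all hypothetical.

def three_lens_check(contest):
    """Return (question, passed) pairs for the three factors above."""
    return [
        ("Can I trust the results?",        contest["integrity"] >= 7),
        ("Same elite students as overall?", contest["impact_index"] >= 20),
        ("National field?",                 contest["states_represented"] >= 15),
    ]

example = {"integrity": 8, "impact_index": 35, "states_represented": 30}
for question, passed in three_lens_check(example):
    print(f"{question} -> {'yes' if passed else 'no'}")
```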
State data comes from students.csv or contest-specific results (e.g., AMO, JMO, MathCounts); some students have a missing or inferred state. For methodology details on MCP and rankings, see MCP — Math Competition Points.
Live rankings: See the Competition Ranking page for impact index and geographic data computed from the current database.
For feedback or suggestions: mathcontestintegrity@gmail.com.