How I lied to myself for 10 months—and why I finally pivoted.
During YC, the group partners’ constant refrain about remaining intellectually honest and prioritizing customer needs seemed almost redundant. Yet I fell into the very trap they had repeatedly warned against: a humbling reminder that even the clearest advice can’t immunize us against our own blind spots.
The decision to pivot emerges not from defeat, but from clarity. For those deep in a similar journey, you’ll recognize the signs: customer conversations that lead nowhere, the elaborate rationalizations for why your solution just needs more time, more features, more refinement. We tell ourselves stories about being “too early” or “ahead of the market,” when really, we’re avoiding a simpler truth.
If you find yourself at this crossroads, you might feel the weight of sunk costs—time invested, relationships built, promises made, pilots run. But I want to share my journey through this realization, the signals I initially dismissed, and the thinking that ultimately led to this decision. Perhaps in my story, you’ll find echoes of your own experience, and possibly, the permission you’ve been seeking to make a necessary change.
Why I believed in AI-generated image detection in the first place
To understand the motivation behind this pivot, it’s important to reflect on why I initially believed AI-generated image detection was a compelling idea. The inspiration stemmed from my experience at GitHub, where I observed firsthand that AI-generated accounts created using both synthetic text and image content were beyond the reach of existing detection tools. Traditional bot detection and spam filters simply weren’t sufficient, and with the surge of AI-generated content on the horizon, this was clearly an imminent challenge.
Of course, covering all modalities—text, image, video, audio, code—was too broad and infeasible. When I reviewed the literature, it became clear that AI-generated text detection was futile, but images that looked authentic to the human eye still carried machine-detectable signals, making images the right place to start.
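To make “machine-detectable signals” concrete, here’s a toy sketch of one classic example (illustrative only, not our detector): the upsampling layers in generative models often leave periodic artifacts that concentrate energy in the high spatial frequencies of an image’s Fourier spectrum, even when the image looks natural to the eye.

```python
# Toy sketch: score how much of an image's spectral energy sits outside the
# low-frequency core. The band split below is illustrative, not calibrated.
import numpy as np
from PIL import Image

def high_frequency_energy(path: str) -> float:
    """Fraction of spectral energy outside the low-frequency core."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    ry, rx = max(h // 8, 1), max(w // 8, 1)  # central band = "low frequency"
    core = spectrum[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    return 1.0 - core / spectrum.sum()

# Compared across corpora of known-real and known-synthetic images, simple
# spectral scores like this separate surprisingly well for some generator
# families, which is what made the problem feel tractable.
# print(high_frequency_energy("sample.png"))
```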
Mistakes
Observing the rising wave of AI-generated abuse and the ability of synthetic images to evade existing safeguards made this problem feel urgent and worth solving. I was eager to build and naively fell into a classic engineering-minded trap: I let the solution overshadow the need to verify genuine market demand.
Five interconnected mistakes cemented the disconnect between our product and the market:
- Insufficient customer discovery. First, my conversations with potential customers were too shallow and sporadic.
- Dismissing low demand. This insufficient discovery led to the second misstep: when prospects showed little interest in paying, I rationalized it as being ahead of the market rather than confronting the harder truth that I was building a solution in search of a problem. Even though my former team at GitHub was my original inspiration, they were never a customer. I convinced myself that this was fine, that enterprise customers just weren’t the right fit for our stage or the maturity of our product. In hindsight, it wasn’t a question of timing or product maturity; it was a signal I chose to ignore: low demand.
- Fixating on existing solutions. Then, instead of heeding these warning signs, I made a third critical error: I dove into competitive analysis, convincing myself that existing solutions were insufficiently nuanced. This affirmed what I had observed at GitHub: existing solutions were quickly becoming obsolete. I became fixated on building a more “sophisticated” product with explainability and context-awareness, essentially a solution no one had asked for. In pursuing a technically correct, comprehensive feature set, I strayed even further from market reality, building an increasingly refined answer to an unvalidated question.
- Slow feedback loop. The fourth mistake compounded all the others: because we were so early in solving what many felt was an unsolvable problem, there was inherent skepticism toward our technology, and people wanted to see the product in action before buying. While most early-stage startups can rapidly test and validate through quick pilots, our product was so technically complex and computationally expensive to build that iteration was painfully slow. Each learning cycle that should have taken days stretched into weeks or months, further delaying our recognition of fundamental market misalignment.
- Raising a $2.7M seed. Fundraising around demo day required telling a story I earnestly believed in with every cell in my body. We were offered twice as much money as we ended up taking. That success gave me confidence in the idea and the direction, masking deeper market realities and providing misleading validation of our hypothesis. Worse yet, we started feeling pressure from our investors to deliver on the original idea, rather than trusting they’d be supportive of the team.
We had inadvertently created a perfect storm: building an increasingly sophisticated solution to an unvalidated problem, while our ability to course-correct was hampered by the very nature of our product.
Reasons to change course
Here are reasons I lost conviction in the problem of AI-generated image detection:
- No one really cares if images are AI-generated. AI-generated images are not inherently the problem; the actual problem is whether something is real or fake. We recognized early on that prospects cared about a broader solution for verifying image authenticity rather than the specific source of manipulation, so we broadened our scope to detecting overall image tampering. But even that was too narrow, since in most contexts a single image is never the sole source of credibility.
- Lack of industry appetite. We got into pilots with the Washington Post, Reuters Fact Check, and Snopes, but saw no significant commitment to pay for AI image detection alone. And while there was some indication that folks would pay to solve the larger problem of image tampering in general (with AI detection bundled as an advanced feature), we weren’t able to roll out a pilot to confirm this appetite because of how long it takes to build a solution that can reliably detect evidence of post-processing or Photoshopping. We spent so much time convincing ourselves that insurance companies processing claim photos would have this need, that it would spring up any day now because of how “obvious” it was and how realistic diffusion outputs were becoming; meanwhile, we weren’t honest with ourselves as hundreds of outbound messages to insurance companies went unread.
- Issues with specific verticals: We saw mild appetite from four verticals: (1) news and media orgs, (2) fraud/trust and safety, (3) government, and (4) tech companies. Each of these industries supports high vendor price points, so I believed that, in theory, I could get folks to sign at the target ACV. However, each came with significant adoption challenges:
News and media
- Slow adoption unless pressured: Media companies are generally slow-moving and resistant to change; they tend to adopt new technologies only under external pressure (e.g., societal shifts) or a genuinely compelling need.
- Content sources and limited verification needs: Approximately 70% of media content is sourced from established channels like AP and Reuters. This reliance on a few major sources means there’s less demand for new content verification solutions, as their primary need is distribution rather than verification.
- Disconnect between buyers and users: In media companies, the two sides of the house, budget-holders (business teams) and end-users (editorial teams), have different priorities: editorial teams focus on deadlines, while business teams care about cost-efficiency. This disconnect complicates the adoption of new verification tools, since the people using them don’t directly influence purchasing decisions.
- Integration challenges: Large media organizations run custom, proprietary platforms that often require significant effort and cost to integrate with. The effort can be worth it when price points justify it, but that’s not always guaranteed.
- High ACV potential, low willingness to pay: While media organizations theoretically support high price points, top-tier outlets like CNN, AP, and Reuters are rarely willing to pay a premium for verification tools. Mid-tier organizations (e.g., NY Times, Washington Post) may offer better ROI but are relatively limited in number, so the TAM there isn’t huge.
- Mismatch between stated needs and reality: There is a significant gap between what media companies claim they need (fact-checking, verification) and their actual focus, which is distribution. Their primary business is delivering stories rapidly, not verifying or generating new content; it’s more or less the same stories cycling through all of them. Organizations like AP and Reuters act as aggregators, collecting content from smaller providers and redistributing it; their value lies in their distribution networks rather than in original content production or verification.
- Disorganized and dispersed image data: One idea was to form partnerships to obtain image data, but media companies often lack centralized, organized data repositories, and legal, logistical, and financial barriers make their data infeasible to obtain. Any public data has already been widely scraped for AI purposes.
Fraud and T&S
The fraud space is challenging to sell into because it is saturated with KYC/KYB and ID/document verification solutions that already use other methods (e.g., triangulating data or manual review) to verify the integrity of images. Whether an image is AI-generated is typically irrelevant, as it isn’t a determining factor in fraud detection. Established tools like Clear, Socure, Persona, and Checkr, which cross-reference databases and triangulate a wide range of data sources, already provide sufficient fraud prevention, reducing the need for AI-generated image detection in this market.
Government
Government contracts are very lucrative, but there are three main challenges with this buyer: (1) federal agencies typically want to see significant traction in the private sector before committing to a contract; (2) compliance requirements such as FedRAMP can be very costly; and (3) the slow sales cycle means the feedback loop is not conducive to learning quickly. The confluence of these factors creates a high barrier to entry, and over-investing here can pose risks to the business given the stringent requirements and delayed adoption timelines.
Government contractors are an adjacent option. Intelligence, security, and travel advisory companies maintain extensive networks that gather real-time information from diverse contributors and sources, including on-the-ground updates like iPhone screenshots. High-risk industries, such as oil and financial companies, subscribe to these feeds, paying substantial fees to stay informed about critical events like terrorist attacks near pipelines or government actions. The model resembles Palantir’s marketplace approach, where a limited number of large organizations (roughly 30 to 100 major clients) pay significant amounts for curated intelligence feeds that support their operations. That said, many of these players carry the same risks as federal agencies, with a slightly lower barrier to entry.
Tech
I spoke with three big-name tech companies: DoorDash, Anthropic, and Stripe. None of them cared specifically about AI-generated images. They had some vision-related fraud-detection pains, but the pain wasn’t acute enough to justify investing in a solution, and even then, they would prefer to build rather than buy.
Unlike KYC or IDV/DocV companies, which have opportunities for white labeling, tech companies whose core focus isn’t IDV or T&S prefer to build the long tail of verification capabilities in-house. Furthermore, T&S teams at tech companies are largely de-prioritized relative to core product areas, meaning the ACV potential there tends to be low.
So, what’s next?
We built Nuanced’s first version as AI coding exploded onto the scene. Our team, a group of programming language nerds and AI skeptics, started experimenting with these new tools, and the more we dug in, the more gaps we saw. AI can generate code fast, but reliable? Not so much. We realized static analysis could be the key to making AI-generated code trustworthy. This isn’t about patching holes; it’s about building a fundamentally smarter approach to software development. As AI starts writing more of our code, we’re creating tools that help developers (and machines) actually understand what’s happening under the hood.
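For a taste of the kind of thing static analysis catches (a deliberately tiny sketch, not our product): AI-generated code loves to call helpers that don’t exist. A few lines of analysis over the syntax tree can flag those before the code ever runs.

```python
# Toy sketch: flag calls to names never bound in a module. Real static
# analysis (scoping, attributes, cross-file call graphs) is far more
# involved; this only illustrates the category of bug caught before runtime.
import ast
import builtins

def undefined_calls(source: str) -> set[str]:
    """Return names that are called but never defined, imported, or built in."""
    tree = ast.parse(source)
    bound = set(dir(builtins))
    called = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            bound.add(node.name)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            bound.update(alias.asname or alias.name.split(".")[0]
                         for alias in node.names)
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            bound.add(node.id)
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            called.add(node.func.id)
    return called - bound

# A plausible AI-generated snippet: `fetch_user` was never defined or imported.
snippet = "import json\nprint(json.dumps(fetch_user(42)))\n"
print(undefined_calls(snippet))  # -> {'fetch_user'}
```

Stay tuned!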