How OrgnIQ Works

What OrgnIQ Measures

OrgnIQ analyzes how content is presented, not whether it's true or false. We detect influence techniques — patterns in language, framing, and emotional appeals that shape how you interpret information.

Think of it like a food nutrition label: we don't tell you whether to eat something, just what's in it.

Transparent by Design

Every podcast gets the same analysis — same model, same rubric, same standards. All media uses influence techniques, and OrgnIQ applies the same methodology to all of it.

Our catalog covers a wide range of podcasts to ensure broad representation.

Yes, We Analyze the Ads Too

Most media analysis tools skip the ads. We don't. When a podcast host reads an ad, they're using the same influence techniques — emotional appeals, authority claims, urgency framing, social proof — that appear in the editorial content. The only difference is someone paid for it.

We made this choice deliberately. Ads aren't a break from influence — they're often the most concentrated form of it. A host who spends three minutes telling you a product changed their life is using the same persuasion patterns we detect everywhere else. Excluding ads would mean pretending a significant portion of what you hear doesn't count.

It also matters because the line between ad and content is disappearing. Sponsored segments, affiliate partnerships, “brought to you by” integrations — these are designed to feel like the show, not like commercials. If we excluded them, we'd be giving a pass to the exact kind of influence that's hardest to notice.

OrgnIQ treats every word the same. If it uses a technique, it gets flagged — whether it's editorial, an interview, or a mattress ad.

The OrgnIQ Score

The OrgnIQ Score (0–100) measures how clean a piece of media is. A higher score means purer, more informational content. Two factors drive the score:

  • Additive density (60%) — how many techniques are detected relative to content length
  • Weighted severity (40%) — how impactful the detected techniques are, with harder-to-notice (“invisible”) techniques weighted more heavily

Not all techniques are equally dangerous. Loaded language is obvious — you hear “radical socialist agenda” and your brain flags it. But selective framing, narrative imprinting, and cherry-picked evidence work precisely because you don't notice them. Techniques that are harder for a listener to detect receive a 2x severity multiplier in our scoring, because the influence you can't see is the influence you can't defend against.

A score of 100 means clean, informational content. A score near 0 means heavy use of influence techniques.
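To make the arithmetic concrete, here is a minimal sketch of how a score like this could be computed. Only the 60/40 weighting and the 2x invisible-technique multiplier come from the description above; the density ceiling, the normalization choices, the detection format, and the technique codes are all invented placeholders, not OrgnIQ's actual implementation.

```python
VISIBLE_WEIGHT = 1.0
INVISIBLE_WEIGHT = 2.0  # harder-to-detect techniques count double (from the text)

def orgniq_score(detections, word_count, max_density=0.02):
    """Hypothetical 0-100 score: higher means cleaner content.

    detections: list of (code, invisible) pairs, where invisible marks
                techniques a listener is unlikely to notice.
    word_count: length of the analyzed transcript.
    max_density: assumed ceiling (detections per word) that maps to the
                 worst-case density penalty -- an invented normalization.
    """
    if word_count <= 0 or not detections:
        return 100.0  # nothing detected: fully clean

    # Additive density: detections per word, normalized to [0, 1].
    density = min(len(detections) / word_count, max_density) / max_density

    # Weighted severity: average technique weight, normalized to [0, 1].
    weights = [INVISIBLE_WEIGHT if invisible else VISIBLE_WEIGHT
               for _, invisible in detections]
    severity = (sum(weights) / len(weights) - VISIBLE_WEIGHT) \
        / (INVISIBLE_WEIGHT - VISIBLE_WEIGHT)

    # 60% density, 40% severity, per the published weighting.
    penalty = 0.6 * density + 0.4 * severity
    return round(100.0 * (1.0 - penalty), 1)
```

For example, a 1,000-word transcript with one invisible and one visible detection scores lower than a clean transcript, and a lone invisible technique is penalized more than a lone visible one.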

Six Additive Families

We group our 32-code detection taxonomy into six families:

  • Emotional — appeals to fear, outrage, or sentiment
  • Faulty Logic — flawed reasoning presented as sound argument
  • Loaded Language — word choices that subtly shape perception
  • Trust Manipulation — manufactured credibility or loyalty exploitation
  • Framing — how stories are constructed to lead to conclusions
  • Addiction Patterns — content-level mechanisms designed to drive compulsive consumption rather than inform
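The family grouping above amounts to a simple lookup from a detection code to its family. A sketch of that mapping follows — the six family names come from this page, but the individual codes are invented placeholders, since the real 32-code taxonomy isn't listed here.

```python
# Family names from the text; example codes are hypothetical stand-ins.
FAMILIES = {
    "Emotional": {"FEAR_APPEAL", "OUTRAGE_BAIT"},
    "Faulty Logic": {"STRAWMAN", "FALSE_DILEMMA"},
    "Loaded Language": {"LOADED_TERM", "EUPHEMISM"},
    "Trust Manipulation": {"FALSE_AUTHORITY", "MANUFACTURED_CONSENSUS"},
    "Framing": {"SELECTIVE_FRAMING", "CHERRY_PICKING"},
    "Addiction Patterns": {"OPEN_LOOP", "RAGE_BAIT"},
}

def family_of(code):
    """Map a detected technique code to its family, or None if unknown."""
    for family, codes in FAMILIES.items():
        if code in codes:
            return family
    return None
```

Grouping codes into families this way lets per-family counts roll up from raw detections without changing the underlying taxonomy.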

Addiction Pattern Detection

Most media analysis focuses on what content says. We also analyze what it does to you. Our addiction pattern detection identifies eight content-level mechanisms that drive compulsive return behavior:

  • Open loops — deliberately leaving stories incomplete to compel return
  • Rage bait — anger engineered as the primary engagement driver
  • FOMO induction — anxiety about being uninformed if you stop consuming
  • Parasocial dependency — building a pseudo-relationship that makes leaving feel like abandoning a friend
  • Serial dependency — structuring content so each piece feels incomplete without the others
  • Variable reward — unpredictable high-arousal payoffs that create slot-machine consumption patterns
  • Urgency manufacturing — artificial time pressure on content that isn't actually perishable
  • Identity lock-in — making consumption a marker of who you are, so stopping feels like self-betrayal

These patterns are distinct from persuasion. A podcast can be factually accurate and still engineer addictive consumption. The question isn't “is this persuasive?” but “is this designed to make me come back compulsively?”

Powered by XrÆ

OrgnIQ isn't running a prompt against a generic chatbot. It's powered by XrÆ — a purpose-built, fine-tuned large language model trained exclusively for influence technique and addiction pattern detection.

XrÆ has been trained on millions of words of real-world media — podcasts, news articles, and editorial content from across the political spectrum. Every training sample is expert-annotated against a rigorous 32-code taxonomy covering emotional manipulation, logical fallacies, language distortion, trust exploitation, narrative framing, and addiction patterns.

The model doesn't just detect patterns — it continuously retrains on fresh media every week through a compound learning pipeline. As persuasion techniques evolve, XrÆ evolves with them. New data flows in, the model improves, and detection accuracy compounds over time.

This isn't a parlor trick. It's infrastructure — a dedicated machine running on local GPU hardware, serving analysis at hundreds of tokens per second with zero cloud dependency. No API middleman. No prompt engineering workaround. A real model, purpose-built for one job, and getting better at it every week.

What OrgnIQ Does NOT Do

  • Fact-check claims or verify accuracy
  • Judge political ideology or tell you what to think
  • Rate content as “good” or “bad”
  • Suggest you stop listening to any podcast

Influence techniques are everywhere — in news, entertainment, marketing, and education. Awareness is the goal, not avoidance.

Powered by XrÆ 6.14

Purpose-built AI for influence technique detection