OrgnIQ analyzes how content is presented, not whether it's true or false. We detect influence techniques — patterns in language, framing, and emotional appeals that shape how you interpret information.
Think of it like a food nutrition label: we don't tell you whether to eat something, just what's in it.
Every podcast gets the same analysis — same model, same rubric, same standards. All media uses influence techniques, and OrgnIQ measures them with one consistent methodology, regardless of source.
Our catalog spans podcasts across genres and the political spectrum to ensure broad representation.
Most media analysis tools skip the ads. We don't. When a podcast host reads an ad, they're using the same influence techniques — emotional appeals, authority claims, urgency framing, social proof — that appear in the editorial content. The only difference is someone paid for it.
We made this choice deliberately. Ads aren't a break from influence — they're often the most concentrated form of it. A host who spends three minutes telling you a product changed their life is using the same persuasion patterns we detect everywhere else. Excluding ads would mean pretending a significant portion of what you hear doesn't count.
It also matters because the line between ad and content is disappearing. Sponsored segments, affiliate partnerships, “brought to you by” integrations — these are designed to feel like the show, not like commercials. If we excluded them, we'd be giving a pass to the exact kind of influence that's hardest to notice.
OrgnIQ treats every word the same. If it uses a technique, it gets flagged — whether it's editorial, an interview, or a mattress ad.
The OrgnIQ Score (0–100) measures how clean a piece of media is. A higher score means purer, more informational content. Two factors drive the score:
Not all techniques are equally dangerous. Loaded language is obvious — you hear “radical socialist agenda” and your brain flags it. But selective framing, narrative imprinting, and cherry-picked evidence work precisely because you don't notice them. Techniques that are harder for a listener to detect receive a 2x severity multiplier in our scoring, because the influence you can't see is the influence you can't defend against.
A score of 100 means clean, informational content; a score near 0 means heavy use of influence techniques.
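To make the weighting concrete, here is a minimal sketch of a severity-weighted score. This is an illustration of the idea described above, not OrgnIQ's actual formula: the base penalty value and the technique code names are hypothetical, and the only grounded assumption is that hard-to-detect techniques count double.

```python
# Illustrative only: codes and penalty values are hypothetical,
# not OrgnIQ's real scoring formula.
HARD_TO_DETECT = {"selective_framing", "narrative_imprinting", "cherry_picked_evidence"}

def orgniq_style_score(detections, base_penalty=2.0):
    """detections: list of technique codes found in a transcript."""
    penalty = 0.0
    for code in detections:
        # Techniques a listener can't easily notice get a 2x severity multiplier.
        weight = 2.0 if code in HARD_TO_DETECT else 1.0
        penalty += base_penalty * weight
    # Clamp to the 0-100 range; 100 means no techniques detected.
    return max(0.0, 100.0 - penalty)

print(orgniq_style_score([]))                                        # 100.0
print(orgniq_style_score(["loaded_language", "selective_framing"]))  # 94.0
```

Note how the covert technique (`selective_framing`) costs twice as much as the overt one (`loaded_language`), reflecting the principle that unseen influence is the hardest to defend against.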
We group our 32-code detection taxonomy into six families: emotional manipulation, logical fallacies, language distortion, trust exploitation, narrative framing, and addiction patterns.
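A grouping like this can be pictured as a simple mapping from family to codes. The six family names below come from the model-training description later in this document; the individual codes shown are illustrative placeholders, not OrgnIQ's real 32 codes.

```python
# Family names from the document; the codes inside each family are
# hypothetical examples, not the real taxonomy.
TAXONOMY = {
    "emotional_manipulation": ["fear_appeal", "outrage_bait"],
    "logical_fallacies": ["strawman", "false_dilemma"],
    "language_distortion": ["loaded_language", "euphemism"],
    "trust_exploitation": ["false_authority", "manufactured_consensus"],
    "narrative_framing": ["selective_framing", "narrative_imprinting"],
    "addiction_patterns": ["cliffhanger", "open_loop"],
}

def family_of(code):
    """Map a detection code back to its family, or None if unknown."""
    for family, codes in TAXONOMY.items():
        if code in codes:
            return family
    return None

print(family_of("strawman"))  # logical_fallacies
```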
Most media analysis focuses on what content says. We also analyze what it does to you. Our addiction pattern detection identifies eight content-level mechanisms that drive compulsive return behavior:
These patterns are distinct from persuasion. A podcast can be factually accurate and still engineer addictive consumption. The question isn't “is this persuasive?” but “is this designed to make me come back compulsively?”
OrgnIQ isn't running a prompt against a generic chatbot. It's powered by XrÆ — a purpose-built, fine-tuned large language model trained exclusively for influence technique and addiction pattern detection.
XrÆ has been trained on millions of words of real-world media — podcasts, news articles, and editorial content from across the political spectrum. Every training sample is expert-annotated against a rigorous 32-code taxonomy covering emotional manipulation, logical fallacies, language distortion, trust exploitation, narrative framing, and addiction patterns.
The model doesn't just detect patterns — it continuously retrains on fresh media every week through a compound learning pipeline. As persuasion techniques evolve, XrÆ evolves with them. New data flows in, the model improves, and detection accuracy compounds over time.
This isn't a parlor trick. It's infrastructure — a dedicated machine running on local GPU hardware, serving analysis at hundreds of tokens per second with zero cloud dependency. No API middleman. No prompt engineering workaround. A real model, purpose-built for one job, and getting better at it every week.
Influence techniques are everywhere — in news, entertainment, marketing, and education. Awareness is the goal, not avoidance.