Predicting the Descent into Extremism and Terrorism: Promise, Peril, and Policy

Radicalization used to be slow—letters, meetings, sermons, pamphlets. Today, it can accelerate in hours. Platforms amplify grievance, connect would-be adherents, and wrap ideology in meme-speed narratives. Intelligence and law-enforcement agencies face a basic asymmetry: the volume of online speech is effectively infinite; human analysts are not. This gap has given rise to predictive extremism detection—a family of methods that use natural-language processing (NLP) and statistical tracking to infer whether a person’s public speech is drifting toward violent extremism.

A recent research contribution by Lane, Holmes, Taylor, State-Davey, and Wragge (2025) shows how this can work in practice. Their approach encodes written statements as vectors, classifies them (e.g., “centrist,” “extremist,” or “terrorist”), and tracks each speaker’s trajectory over time—flagging gradual drifts or sharp jumps that may presage violence. While early, the results suggest real potential for early warning. They also spotlight a minefield of risks: false positives, speech chilling, overbroad government use, and algorithmic bias.

This essay explains, in public-facing terms, what these systems do, where they help, where they can harm, and how policymakers can harness benefits without undermining civil liberties. It offers a lightly technical tour for non-technical leaders, grounded in current research and threat reporting. 


What the technology does—in plain English

1) Turning words into “coordinates”

Modern NLP models convert sentences into embeddings—numerical vectors that capture semantic meaning. Think of each sentence as a dot in a high-dimensional map where nearby dots mean similar ideas or tones. One widely used approach is the Universal Sentence Encoder (USE), introduced in 2018, which outputs a 512-number vector per sentence and transfers well to many classification tasks (Cer et al., 2018).
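To make the idea concrete, here is a minimal sketch, assuming TensorFlow and TensorFlow Hub are installed and the publicly hosted USE model is reachable; the two example sentences are hypothetical illustrations, not data from the study.

```python
# Minimal sketch: embed sentences with the Universal Sentence Encoder (USE)
# and compare them by cosine similarity. Assumes tensorflow and tensorflow_hub
# are installed and the public USE model is reachable.
import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "We should win this argument at the ballot box.",        # hypothetical example
    "Violence is the only language our enemies understand.",  # hypothetical example
]
vectors = embed(sentences).numpy()  # shape (2, 512): one 512-number vector per sentence

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Rough measure of how close two sentences are in meaning."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors[0], vectors[1]))
```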

2) Classifying rhetoric

Once you can place statements on that semantic map, you can train a classifier to distinguish categories. Lane et al. use support-vector machines (SVMs)—a standard technique—to separate regions associated with ordinary political discourse, extremist endorsement, and explicit terrorist advocacy or justification. Trained on labeled examples, such models can identify patterns that are statistically associated with each category. In their experiments, detecting explicitly terrorist rhetoric was highly accurate; detecting early extremism—a subtler signal—was harder but still promising.
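As an illustration of the classification step, the following is a minimal sketch using scikit-learn's SVM implementation; the random embedding matrix and the three class labels are placeholders standing in for labeled data, not a reproduction of the authors' experiment.

```python
# Minimal sketch: train a linear SVM on labeled sentence embeddings.
# X would normally hold USE vectors (one row per statement); the random
# matrix and the three class labels here are placeholders, not real data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))                                    # stand-in for 300 embedded statements
y = rng.choice(["centrist", "extremist", "terrorist"], size=300)   # stand-in labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="linear", probability=True)  # probability=True yields scores usable for triage
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```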

3) Tracking trajectories over time

A single statement can be an outlier; what matters is movement. The research uses a tracker (conceptually similar to a Kalman filter) to smooth noisy observations and estimate a person’s latent “state of mind” as it evolves. That moving estimate lets analysts see whether a speaker is inching toward, or bouncing into, more dangerous rhetorical regions, and whether the trend is accelerating. 
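The smoothing idea can be sketched with a one-dimensional Kalman-style filter over a noisy per-statement score; the noise parameters below are illustrative assumptions, not values from the paper.

```python
# Conceptual sketch: smooth a noisy per-statement score with a 1-D
# Kalman-style filter so sustained drift stands out over one-off outbursts.
# The process/measurement noise values are illustrative, not tuned.

def track_scores(scores, process_var=0.01, measurement_var=0.25):
    """Return a smoothed estimate of the speaker's latent 'state' over time."""
    estimate, estimate_var = scores[0], 1.0
    smoothed = [estimate]
    for z in scores[1:]:
        # Predict: the latent state may drift a little between statements.
        estimate_var += process_var
        # Update: blend the prediction with the new noisy observation.
        gain = estimate_var / (estimate_var + measurement_var)
        estimate += gain * (z - estimate)
        estimate_var *= (1 - gain)
        smoothed.append(estimate)
    return smoothed

# A single spike amid calm statements barely moves the estimate;
# a sustained upward run does.
print(track_scores([0.1, 0.1, 0.9, 0.1, 0.2, 0.4, 0.6, 0.8]))
```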

4) Visualizing change for humans

The final ingredient is visual analytics. By projecting the high-dimensional map into two dimensions, analysts can view a person’s path over days or months, and compare it to group averages, leaders, or events. The display itself is not the intelligence; the trend—especially a sustained drift toward justification of violence—is.
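A minimal sketch of the projection step, using PCA from scikit-learn to collapse 512-dimensional vectors into two dimensions and trace one speaker's path; the trajectory data are synthetic.

```python
# Minimal sketch: project high-dimensional sentence vectors to 2-D with PCA
# and plot one speaker's path over time. The vectors here are synthetic.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
trajectory = np.cumsum(rng.normal(size=(30, 512)), axis=0)  # 30 statements, drifting over time

points = PCA(n_components=2).fit_transform(trajectory)

plt.plot(points[:, 0], points[:, 1], marker="o")
plt.annotate("start", points[0])
plt.annotate("latest", points[-1])
plt.title("Speaker trajectory in projected semantic space")
plt.show()
```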


Why this matters now

Threat reporting on both sides of the Atlantic underscores an evolving landscape. In Europe, Europol’s most recent EU Terrorism Situation and Trend Report (TE-SAT 2025) documents dozens of completed, foiled, or failed terrorist attacks across member states in 2024, alongside persistent online propaganda ecosystems. In the United States, the Homeland Threat Assessment 2025 emphasizes that domestic violent extremists and foreign terrorist organizations continue to exploit social platforms to recruit, radicalize, and call for violence. These reports do not endorse any particular predictive system, but they frame the scale and velocity of the problem such systems attempt to address. 


Where predictive tools can help
  1. Early, non-coercive intervention
    If a credible trajectory is detected early—before criminal conduct—schools, community organizations, or public-health-style programs can attempt soft interventions (counseling, exit ramps, counter-narratives). That is both ethically preferable and practically cheaper than post-attack responses.

  2. Analyst triage at scale
    No agency can read everything. A reliable model can prioritize review of accounts showing concerning trends while allowing most speech to pass untouched. The tool does not “decide” anything; it queues human review (a minimal triage sketch follows this list).

  3. Group-level insight
    Radicalization is social. Tracking vectors over time can reveal influence patterns—for example, when followers’ rhetoric predictably drifts after a propagandist releases new content. That enables targeted counter-messaging and community engagement rather than mass surveillance.

  4. Program evaluation
    When governments fund prevention initiatives, they need metrics beyond raw arrest counts. Aggregate trajectory measures can help evaluate whether a program correlates with de-escalation in community rhetoric.

  5. Academic clarity
    Scholars have long debated the internet’s causal role in radicalization. Reviews and meta-analyses show mixed but significant links between online ecosystems and extremist offending. Better measurement—trajectory-based rather than snapshot-based—can sharpen that literature. 
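Returning to item 2, the triage idea can be sketched as a simple review queue that surfaces only the top-scoring trajectories for a human analyst; the account identifiers and scores are hypothetical.

```python
# Minimal sketch: queue only the highest-scoring trajectories for human review.
# Account identifiers and scores are hypothetical; the model never "decides".
import heapq

def top_k_for_review(account_scores, k=3):
    """Return the k accounts with the highest smoothed trajectory scores."""
    return heapq.nlargest(k, account_scores.items(), key=lambda item: item[1])

scores = {"acct_001": 0.12, "acct_002": 0.87, "acct_003": 0.34,
          "acct_004": 0.91, "acct_005": 0.08}

for account, score in top_k_for_review(scores, k=2):
    print(f"queue for analyst review: {account} (score {score:.2f})")
```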


Technical realities (and limits) policymakers should understand
  1. Good at the obvious; less certain at the subtle
    Lane et al. report very strong performance when detecting overtly terrorist rhetoric in their dataset, but early extremism is fuzzier. That is intuitive: explicit praise of terrorist acts has clear linguistic markers; nascent radicalization often mimics heated but lawful political speech. Expect false positives near the boundary and false negatives where coded language or irony is used.

  2. Models inherit bias from their inputs
    Embeddings trained on large corpora can encode the biases present in those corpora. Even when technical teams test for bias, deployment to new communities, languages, or dialects can surface unexpected disparities in error rates and flagging patterns. The USE paper itself examined bias metrics; those assessments must be continuous, not one-off.

  3. Domain shift is the norm
    Extremist rhetoric evolves. Slogans mutate; euphemisms replace banned words; community norms shift. Models degrade unless they are retrained or adapted with fresh, representative data—ideally with diverse annotators and public documentation of changes.

  4. Labels are political
    Who decides what counts as “extremism” or “terrorism”? Legal definitions vary by jurisdiction and can shift with administrations. Systems that bake those labels into training data risk hard-coding political choices into code. This is not a reason to avoid modeling; it is a reason to separate technical work from policy authority and to publish the mapping between legal definitions and model classes.

  5. Ground truth is hard
    Most research, including Lane et al., relies on open-source text (e.g., speeches, posts, quotes) and expert labeling. But radicalization is a process, not a single post. To evaluate whether a system truly predicts behavior, researchers need carefully governed access to longitudinal data (with strong privacy controls) and agreed proxy endpoints (e.g., platform bans, arrests, or verified participation in violent groups). 


The civil-liberties red lines

Civil-society groups have warned for years that predictive technologies can amplify injustice and chill lawful speech. In policing, the ACLU and others have documented how prediction built on biased data reproduces bias; similar logics apply to speech-based systems. International media-freedom bodies have likewise issued guidance: if states use AI to moderate or surface content, they must protect freedom of expression, ensure transparency, and provide avenues for redress. For predictive extremism detection to be legitimate in a democracy, these critiques are not adversarial “gotchas”—they are design requirements.


Guardrails that make the difference

1) Keep humans in the loop—by statute, not just policy.
Algorithms should flag, never decide. Any action that affects a person’s rights (from investigative targeting to social-service outreach) should require a documented human review with accountability.

2) Narrow purpose and separation of powers.
Specify in law what the models can be used for (e.g., triage for analyst review; not for automated detention or immigration decisions), which agencies may use them, and how judiciary or independent bodies can check misuse. Purpose limitation curbs function creep.

3) Transparency and independent audits.
Require public model cards (what data, with what bias tests, for what use), an annual public report on performance and complaints, and third-party audits with access to de-identified production data. If the law already provides oversight channels (e.g., specialized courts or inspectors general), extend their remit to algorithmic systems.

4) Due process and redress.
If a model contributes to a decision that burdens someone, that person must have an explainable basis to contest it. Even when operational security limits disclosure, policymakers can mandate structured summaries of the reasons behind flags.

5) Data hygiene and minimization.
Do not build massive shadow dossiers. Collect the minimum public data necessary; avoid scraping private data without warrants; delete data when no longer needed; and encrypt everything. Clear deletion schedules should be auditable.

6) Bias testing and community impact assessments.
Before deployment—and regularly thereafter—test for differential error rates across protected classes, dialects, and political viewpoints. Conduct community impact assessments (analogous to environmental impact statements), especially where systems may expose marginalized groups to disproportionate scrutiny.
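A differential-error-rate check of the kind described above can be sketched in a few lines; the groups, ground-truth labels, and model flags below are synthetic placeholders.

```python
# Minimal sketch: compare false-positive rates across groups.
# Groups, ground-truth labels, and model flags are synthetic placeholders.
from collections import defaultdict

records = [  # (group, genuinely_violent, flagged_by_model)
    ("A", False, True), ("A", False, False), ("A", True, True),
    ("B", False, True), ("B", False, True), ("B", False, False), ("B", True, True),
]

negatives = defaultdict(int)
false_positives = defaultdict(int)
for group, violent, flagged in records:
    if not violent:                       # only non-violent speakers can be false positives
        negatives[group] += 1
        false_positives[group] += int(flagged)

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"group {group}: false-positive rate {rate:.2f}")
```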

7) Clear thresholds and calibration for action.
A model’s raw score is not a decision. Calibrate thresholds with policymakers and community partners: a low-score drift might trigger soft outreach; a sustained, high-confidence move into explicit violent advocacy might warrant analyst escalation. Put those thresholds in a public policy; do not leave them to vendor defaults.
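One way to make such a policy explicit is a published score-to-action table; the bands and actions below are illustrative placeholders, not recommended values, and every listed action still presumes documented human review.

```python
# Minimal sketch: map calibrated trajectory scores to policy-defined next steps.
# The bands and actions are illustrative placeholders, not recommendations,
# and every action still routes through documented human review.
THRESHOLD_POLICY = [
    (0.90, "escalate to analyst review (documented, auditable)"),
    (0.60, "offer community outreach / soft intervention"),
    (0.00, "no action"),
]

def action_for(score: float) -> str:
    for threshold, action in THRESHOLD_POLICY:
        if score >= threshold:
            return action
    return "no action"

for s in (0.12, 0.71, 0.95):
    print(s, "->", action_for(s))
```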


How this fits with current threat reporting

Public threat assessments increasingly emphasize online ecosystems as accelerants. TE-SAT 2025 catalogs persistent propaganda channels associated with jihadist, right-wing, and other ideologies; DHS’s Homeland Threat Assessment details how domestic and foreign actors exploit open platforms and fringe boards alike. Predictive extremism systems are not panaceas, but they address a specific problem implied in these reports: signal extraction from torrents of content. Good governance lets agencies sift without surveilling everyone; bad governance invites overreach and backlash that ultimately reduces cooperation and safety. 


A short technical appendix (for non-technical leaders)
  • Embeddings: Models like USE translate each sentence into a vector of numbers. Similar sentences have similar vectors. The math (cosine similarity, margins) lets algorithms tell “how close” two statements are in meaning. 

  • Classifiers: An SVM draws boundaries in that vector space. Training gives it examples of each class; the model learns a surface that best separates those examples.

  • Tracking: A tracker treats each new sentence as a noisy measurement of an underlying state (the person’s current rhetorical posture). It updates the state over time, dampening overreactions to one-off outbursts and highlighting sustained drifts.

  • Evaluation: For tasks with clear language (e.g., praising terrorist attacks), models often achieve high accuracy on test sets. For subtle boundary cases—sarcasm, dog-whistles—the uncertainty is greater. Proper deployment requires confidence scores and calibration to avoid over-triggering. 
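A minimal sketch of the calibration step, using scikit-learn's CalibratedClassifierCV to turn raw SVM margins into probability-like confidence scores; the data are synthetic stand-ins.

```python
# Minimal sketch: calibrate an SVM's scores so they behave like probabilities,
# supporting the confidence thresholds discussed above. Data are synthetic.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 512))      # stand-in for embedded statements
y = rng.integers(0, 2, size=400)     # stand-in binary labels (benign vs. concerning)

calibrated = CalibratedClassifierCV(LinearSVC(max_iter=5000), method="sigmoid", cv=3)
calibrated.fit(X, y)

print(calibrated.predict_proba(X[:3]))   # calibrated confidence scores, not raw margins
```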


Responsible public-sector uses (and non-uses)

Appropriate uses

  • Content triage for human review in open-source intelligence units.

  • Program evaluation to see whether prevention efforts correlate with de-escalation in aggregate rhetoric.

  • Public-health style referral to community resources where lawful and transparent.

Out-of-bounds uses

  • Automated punitive actions (e.g., arrests, detention, immigration status changes) triggered by a score.

  • Secret blacklists without notice, appeal, or periodic review.

  • Generalized mass surveillance—indiscriminate scraping of private communications or bulk collection without statutory authorization and court oversight.

These lines are not abstract. Human-rights guidance stresses that any AI system touching speech must be coupled with freedom-of-expression safeguards and narrow proportionality tests (OSCE, 2022).


Research and policy to invest in now
  1. Bilingual and dialect-fair models.
    Radicalization is multilingual. Fund research on embeddings and classifiers that perform evenly across languages and dialects—and mandate bias testing accordingly.

  2. Open datasets with ethical governance.
    Create de-identified, governed corpora for research with transparent labeling guidelines, community oversight, and strict privacy rules. This avoids dependence on opaque, vendor-owned datasets.

  3. Independent testbeds and red-team exercises.
    Standing testbeds—jointly run by civil society, academia, and government—can evaluate claims before public money is spent. Fund red-teams to probe for failure modes and disparate impact.

  4. Outcome-based metrics.
    Shift from “did the model flag something?” to “did flagged trajectories correlate with measurable prevention (e.g., engagement that reduces risk) without chilling lawful speech?” That requires closer collaboration between security agencies, social-science researchers, and communities.

  5. Clearer legal definitions and sunset clauses.
    Because labels like “extremism” are politically volatile, tie deployments to codified definitions, require sunset clauses, and force periodic legislative reconsideration informed by independent audits.


Conclusion: Prevention with restraint

Predictive extremism detection speaks to a real need: to surface faint signals of danger amid overwhelming noise. The core technical ideas—embedding language, classifying rhetoric, tracking trajectories—are not science fiction; they are here, and the basic evidence shows promise. At the same time, history warns that predictive tools can drift from prevention toward unaccountable surveillance, especially when definitions blur and oversight lags.

For policymakers, the mandate is not to choose between safety and liberty; it is to engineer both. That means guarding purpose, keeping humans in the loop, publishing what the models do and don’t do, auditing impacts, and measuring success by de-escalation, not merely by flags. Done right, these systems become modest, transparent instruments that help communities intervene earlier and more humanely. Done wrong, they become blunt tools that erode trust and, paradoxically, make prevention harder.

Safety is not a switch; it’s a system. If we’re going to predict, we must also protect—the public, the targets of algorithmic error, and the hard-won freedoms that define the societies we aim to keep safe.


References

Binder, J. F. (2022). Terrorism and the Internet: How dangerous is online radicalization? Frontiers in Psychology, 13, 997390.

Cer, D., Yang, Y., Kong, S., Hua, N., Limtiaco, N., St. John, R., Constant, N., Guajardo-Céspedes, M., Yuan, S., Tar, C., Sung, Y.-H., Strope, B., & Kurzweil, R. (2018). Universal Sentence Encoder. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (pp. 169–174). Association for Computational Linguistics.

Department of Homeland Security. (2024). Homeland Threat Assessment 2025. Office of Intelligence and Analysis.

Europol. (2025). European Union Terrorism Situation and Trend Report 2025 (EU TE-SAT 2025). Europol Public Information.

Federal Bureau of Investigation & Department of Homeland Security. (2021). Strategic Intelligence Assessment and Data on Domestic Terrorism. U.S. Government.

Lane, R. O., Holmes, W. J., Taylor, C. J., State-Davey, H. M., & Wragge, A. J. (2025). Predicting the descent into extremism and terrorism. arXiv preprint.

OSCE Representative on Freedom of the Media. (2022). Spotlight on Artificial Intelligence and Freedom of Expression. Organization for Security and Co-operation in Europe.


Source: http://terrorism-online.blogspot.com/2025/10/predicting-descent-into-extremism-and.html