Logit Bias and Token Banning in LLMs: How to Control Outputs Without Retraining

Logit bias and token banning let you steer LLM outputs without retraining. Learn how to block unwanted words, prevent the model from paraphrasing its way around your bans, and apply these techniques safely in real-world AI systems.
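The core mechanic is simple: before the model samples its next token, you add a number to (or subtract one from) the raw scores of specific tokens, and a large enough negative bias effectively bans a token. The sketch below is a minimal, framework-free illustration of that idea in plain Python; the token names, function names, and the -100..100 bias convention (which mirrors the range used by several hosted LLM APIs) are illustrative assumptions, not any vendor's implementation.

```python
def apply_logit_bias(logits, bias=None, banned=None):
    """Adjust raw next-token logits before sampling.

    logits: dict mapping token -> raw score from the model
    bias:   dict mapping token -> additive bias (illustratively -100..100)
    banned: tokens to exclude from the candidate set entirely
    """
    adjusted = dict(logits)
    for tok, b in (bias or {}).items():
        if tok in adjusted:
            adjusted[tok] += b
    for tok in (banned or []):
        # Banning is the limit case: the token can never be sampled.
        adjusted.pop(tok, None)
    return adjusted


def greedy_pick(logits):
    """Softmax is monotonic, so greedy decoding just takes the max logit."""
    return max(logits, key=logits.get)


# Toy vocabulary and scores for one decoding step (made up for illustration).
logits = {"cat": 2.0, "dog": 1.5, "fish": 0.5}

# Ban "cat" outright and nudge "fish" upward.
out = apply_logit_bias(logits, bias={"fish": 2.0}, banned={"cat"})
print(greedy_pick(out))  # → fish  (2.5 beats dog's 1.5; cat is gone)
```

Note that this operates on token IDs, not words: in a real system a single word can map to several tokens (with and without a leading space, different capitalizations), so blocking a word reliably means biasing every tokenization of it.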
