Nov 25, 2025

[ACCU 2025] Learning To Stop Writing C++ Code

Learning To Stop Writing C++ Code (and Why You Won’t Miss It) - Daisy Hollman - ACCU 2025
https://www.youtube.com/watch?v=mpGx-_uLPDM&t=870s

Best Practices for coding with LLMs

  • Use smaller files
  • Over-test everything.
  • LLMs are pretty good at generating tests for existing code
    • But they're also pretty decent at helping you with Test-Driven Development
  • Well-contained unit tests are much easier for LLMs to reason about (see the sketch after this list)
  • Encapsulation is critical!
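
A minimal sketch of what "well-contained" means here. The clamp_percentage function and the test values are hypothetical; the point is that the function, the test, and everything needed to reason about them fit in one small file with no hidden state.

C++
// Self-contained unit test: plain asserts, no fixtures, no globals.
#include <algorithm>
#include <cassert>

// Hypothetical function under test.
int clamp_percentage(int value) {
    return std::clamp(value, 0, 100);  // C++17
}

int main() {
    assert(clamp_percentage(-5) == 0);    // below range clamps to 0
    assert(clamp_percentage(42) == 42);   // in-range values pass through
    assert(clamp_percentage(250) == 100); // above range clamps to 100
    return 0;
}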

LLMs currently struggle with "long-term learning"
Whereas a human working on the same project for weeks or months can abstract away the details of a complicated workflow and "learn" which poorly encapsulated sharp edges are ignorable, LLMs currently struggle with this kind of thing (as of early 2025).

In other words, code coupling is bad—don't connect dissimilar things from different units of encapsulation in unintuitive ways.

  • Naming is more important than ever
  • Intuitive abstraction design goes a long way
  • Agents often don't know to "check" for unintuitive behavior
    ...or they might "check" sometimes and not other times
  • Writing abstractions that are easy to correctly "guess" how they work is important
  • Write better (but still concise!) comments and documentation
    QUOTE
    The compiler does not read comments and neither do I — Bjarne Stroustrup
    Maybe it's time to revise this? LLMs do read comments
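
For example, a short contract-style comment can state what the signature alone cannot. This is a sketch; the find_index function is hypothetical:

C++
#include <algorithm>
#include <vector>

// Returns the index of `key` in `sorted_values`, or -1 if it is not present.
// Precondition: `sorted_values` is sorted in ascending order.
// (The comment carries the sortedness requirement, which the signature cannot express.)
int find_index(const std::vector<int>& sorted_values, int key) {
    auto it = std::lower_bound(sorted_values.begin(), sorted_values.end(), key);
    if (it != sorted_values.end() && *it == key)
        return static_cast<int>(it - sorted_values.begin());
    return -1;
}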


High cohesion is good
  • This is the opposite of code coupling—similar things within a given unit of encapsulation should be grouped together.
  • "Don't Repeat Yourself" (DRY) coding helps make efficient use of the LLM's context window
  • Don't do unexpected things
  • Especially if those things often don't have syntax (e.g., copy constructors in C++, auto-dereferencing in Rust, non-idiomatic __getattribute__ in Python, etc.)
  • In C++, use regular types whenever possible. (See my note on regular types: https://vsdmars.blogspot.com/2018/06/c-regular-type.html; basically, design by contract with preconditions.) A minimal sketch follows this list.
  • Don't mix owning and non-owning semantics in the same type or template
  • Don't mix value and reference semantics in the same type or template
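
A minimal sketch of a regular type (the Point name is illustrative, not from the talk): it copies, compares, and destroys like an int, with pure value semantics, so there is no hidden ownership or aliasing to guess at.

C++
struct Point {
    int x = 0;
    int y = 0;
    bool operator==(const Point&) const = default;  // C++20 defaulted equality
};

// Usage: copies are independent values; equality means "same value".
// Point a{1, 2};
// Point b = a;   // b is a copy, not an alias of a
// // a == b is true; mutating b never affects a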

KEY TAKEAWAY
Writing code that LLMs will understand is not that different from writing code that humans will understand, except that we can start to understand and quantify why these best practices increase understandability.


Design by contract
  • Both contracts and effects systems are ways of encapsulating information and reducing code coupling.
  • Encapsulation is key to effectively working with LLMs because of the context window size constraints.
  • But also, it's a lot easier to train LLMs on small, well-contained problems.
  • Contracts promote Liskov Substitutability, allowing LLMs to infer behavior of a broader category of types.
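
A minimal sketch of a precondition-style contract using a plain assertion (the at_checked function is hypothetical; dedicated contract syntax is planned for C++26, but the idea is the same): the requirement is stated once at the boundary instead of being re-discovered by every caller.

C++
#include <cassert>
#include <cstddef>
#include <vector>

// Precondition: index < values.size()
// Postcondition: returns the element at `index`; `values` is unchanged.
int at_checked(const std::vector<int>& values, std::size_t index) {
    assert(index < values.size());  // contract check (debug builds)
    return values[index];
}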

An "Effects System" is a way for a programming language to track what a function does (its side effects), not just what it returns (its data type).

While C++ doesn't have a full academic effects system (like the research language Koka), it has a "pragmatic" one built into keywords you use every day.

Here is how const and noexcept act as an effects system to help both the compiler and LLMs.

The Concept: Labeling the "Black Box"

Without an effects system, a function signature void process(T& data) is a black box. It could do anything: write to disk, throw an error, modify global state, or format your hard drive.

An effects system puts warning labels on the box.

Why LLMs Love This (The "Context" Win)
The slide mentioned "Context Window Constraints." This is where effects systems shine for AI.

If you give an LLM this code:

C++
// Case A: No "Effects"
void transform(Data& d);

The LLM has to "hallucinate" or guess: Does this function throw? Do I need a try-catch block? Does it invalidate my iterators? It has to consider all possibilities, which wastes "reasoning tokens."

If you give it this:

C++
// Case B: Constrained Effects
void transform(Data& d) noexcept;

The search space collapses. The LLM immediately knows: No exception handling logic is needed here. It can focus its limited attention on the actual logic rather than defensive coding.
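
const works the same way as a second effect label. A sketch (the Report type and its members are hypothetical, not from the talk):

C++
#include <cstddef>
#include <string>

// Case C: const as an effect label.
class Report {
public:
    // Read-only and non-throwing: callers (and LLMs) can drop
    // "did this mutate something?" and "do I need try/catch?" reasoning.
    std::size_t word_count() const noexcept;

    // May mutate the report and may throw (e.g., on allocation failure).
    void append_section(const std::string& text);
};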

