Madeleine.Cool

Tagged “ai”

  1. Wikipedia has a guide to spotting AI writing

    The whole guide is excellent, but I particularly liked this assessment of the problems with AI-generated content.

    LLMs (and artificial neural networks in general) use statistical algorithms to guess (infer) what should come next based on a large corpus of training material. They thus tend to regress to the mean; that is, the result tends toward the most statistically likely output, the one that applies to the widest variety of cases. This can simultaneously be a strength and a "tell" for detecting AI-generated content.

    For example, LLMs are usually trained on data from the internet in which famous people are generally described with positive, important-sounding language. Consequently, the LLM tends to omit specific, unusual, nuanced facts (which are statistically rare) and replace them with more generic, positive descriptions (which are statistically common). Thus the highly specific "inventor of the first train-coupling device" might become "a revolutionary titan of industry". It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated.

    Wikipedia

  2. Can a chat bot be "moral"?

    Is AI conscious? What's happening when we see AI try to keep from being shut down or stop to "look" at cat pictures? How do we go about making a "moral" chat bot?

    “We're starting to see AI systems that don't want to be shut down, that are resisting being shut down.

    With published research saying it could blackmail the engineer that's going to shut it off if given the opportunity to do so.

    Even when ordered to "allow yourself to shut down," the AI still disobeyed 7% of the time.

    So my feeling was we were out way past where theory was. You couldn't really approach these questions from a theoretical perspective, because we just didn't have enough data to be able to make categorical theoretical assessments of what was going on. But there was all this interesting experimental work happening that was just showing this is the kind of behavior that's coming out of these things.

    We should try to figure out what's going on to say, here are the things we can say with any degree of reasonable confidence for now. Here's where we draw the line, and beyond that, it's all murky and speculative and we really don't know. So I wrote to a guy at Anthropic, whom I had met 10 years ago at Google, when he was an 11-year-old prodigy, and said, this is not about Anthropic[…]”

    From Search Engine: Mysteries of Claude, Feb 27, 2026

  3. What is homework for in an age of AI?

    This is one of the first things on AI I found really interesting. It forces us to look at the holes in our current tactics. AI, like the calculator did for math, reveals that the essays we ask students to write are about checking off boxes, not engaging with the task of writing, which is also a task of thinking and communicating.

    Maybe the formulaic, three-point, five-paragraph essay was never actually a good assignment.

    A teenager explains why he shouldn’t have to write homework essays anymore. Is there some way for adults to force teens to still do homework? Or to convince them they should want to?

    Search Engine

  4. Can design and code work on the same structure?

    Should design+code tools be trying to turn visuals into code? Or somehow letting design and code work on the same thing directly?

    There’s a version of this where we end up in five years with translation tooling that actually works well. Code to canvas in milliseconds. Canvas back to code with full token fidelity. Collaboration happening fluidly across both environments.

    And we’d still be running two parallel systems, kept in sync by increasingly sophisticated automation, with all the maintenance overhead and structural drift that creates.

    The other version is one where the design system is the substrate both environments read from directly. Where a decision made in one place propagates to the other not through capture or export or handoff, but because they’re drawing from the same structure. Where AI agents generating UI are composing from a system that knows what it is, not generating approximations from visual reference.

    I know which version I think we should be building toward. I’m not sure the tooling announcements of the past six months are pointing there.

    The roundtrip is getting smoother. I’m more interested in why we need the trip at all.

    Murphy Trueman

  5. the more your typography responds to the viewport, the less it will respond to user preferences

    One action (zooming) changes the size of a pixel, while the other (resizing) changes the size of the browser itself – but both change the number of pixels across the width of the browser. As the window gets smaller, or the pixel gets larger – there are fewer pixels in the viewport.

    That disconnect makes responsive typography unreliable. If your text is set to resize based only on a viewport or container, then the user zoom will have no effect! Similarly, neither 1vw nor 100vw accounts for the user default font-size.

    Stephan Schwab

    This whole video from OddBird was worth watching to see Miriam work through the reasoning. In addition to giving me much to think about in terms of writing CSS that gives website visitors good experiences, I'm left wondering whether AI is capable of this sort of wondering. Can an LLM ask, "Hmm, I wonder how this approach to font sizing actually interacts with user preferences and zoom?"
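
    A minimal CSS sketch of the trade-off the quote describes (the class names and specific values are mine, purely for illustration): text sized only in viewport units won't respond to browser zoom or the user's default font-size setting, while mixing a `rem` term back in keeps the text responsive to both.

    ```css
    /* Responds only to the viewport: user zoom and the user's
       default font-size preference have no effect on this text. */
    .viewport-only {
      font-size: 4vw;
    }

    /* Mixing rem (derived from the user's default font size) with vw
       lets the text respond to the viewport AND to user preferences. */
    .mixed {
      font-size: calc(1rem + 1vw);
    }

    /* Same idea with bounds; the rem-based min and max keep zoom and
       preference changes meaningful across the whole range. */
    .clamped {
      font-size: clamp(1rem, 0.75rem + 1vw, 2rem);
    }
    ```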

  6. Rachel Andrew - Generative AI has broken the subject matter expert/editor relationship

    Generative AI has broken that contract. Increasingly writers receive content that looks polished, yet contains inaccuracies. This can be because the SME, while polishing their content using AI tools, has missed the fact that the tool has also modified some code or changed the meaning of text. It can also be that the drive for productivity with these tools has meant that people are being asked to cover broader subject areas, so are relying on AI tools for research rather than their own knowledge. AI can be very confidently wrong, and if the text seems clear, it’s possible to miss that it’s clearly nonsense.

    Rachel Andrew

    Whether productivity gains have broadened who we ask to be experts, or experts are missing what an AI "edit" has quietly changed while polishing, human editors now have to do much more to ensure that good-looking content is actually good and true.

  7. AI as intern

    He told me that AI right now is like having a little assistant to boss around and make you some stuff so you can say, 'Most of this is garbage, but I can use this part, and you’ve given me something to work with or against.'

    —Austin Kleon, AI as intern
