Thomas Figved

AI & Ethics

Where I stand on AI

I use AI in my work. I also have problems with it. This page is an attempt to hold both of those things at once.

What it's good at

AI is genuinely useful for building software. I'd be lying if I said otherwise.

It makes me faster. It helps me work with tools and frameworks I haven't used before without the ramp-up time that used to be unavoidable. When I'm stuck on a problem, it's good at surfacing solutions that already exist: patterns, libraries, approaches I might not have found on my own, or not as quickly. It doesn't replace thinking, but it's a good thinking partner.

It also brought back something I wasn't expecting. After years of doing the same kind of work, the craft had started to feel repetitive. Working with AI changed that. The loop is different now. I spend less time on boilerplate and more time on decisions, architecture, creative problem-solving. It made the work interesting again.

This portfolio was built with Claude. The component architecture, the quality pipeline, the test coverage. I steered the decisions, wrote the specs, reviewed every line. The AI did the heavy lifting on implementation.

Beyond software, AI is already accelerating research in medicine, climate science, and materials science. Problems that were too complex or too slow for humans alone are becoming tractable. The cost is real, but so is the potential.

My tool of choice is Claude, the strongest coding model I've worked with, backed by an active developer community building real workflows around it.

What it costs

That said, the cost is real.

Training a large language model requires staggering amounts of energy and water. Running one does too. A single ChatGPT query uses about five times more electricity than a web search. Multiply that by millions of users, all day, every day, and the numbers stop being abstract. Global data center electricity consumption is projected to more than double by 2026. Generating that power emits CO2. The hardware needs cooling, and cooling needs water. In regions already dealing with drought and drinking water stress, those liters come from somewhere. And the countries that control the data centers will set the terms for everyone else.
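
To put rough numbers on that multiplication: the 0.3 Wh per web search below is a commonly cited estimate, and the query volume is an assumption for illustration, not a measured figure.

    0.3 Wh per search × 5 ≈ 1.5 Wh per AI query
    1.5 Wh × 100 million queries per day ≈ 150 MWh per day
    150 MWh per day × 365 ≈ 55 GWh per year

That's one service, at one assumed volume, before any training costs. 150 MWh a day is on the order of what a few thousand households use.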

Then there's what it does to work. A developer joining a team used to spend weeks reading the codebase, asking questions, making mistakes that built real understanding. Now the expectation is shifting: plug in the AI, ship on day one. A Harvard Business Review survey found that 60% of organizations have already cut headcount in anticipation of AI. Not because the AI proved it could do the work. In anticipation. The jobs disappeared before the technology could actually replace them. Salaries compress. Expertise becomes a commodity. The word for this is alienation, and it's not abstract. It's happening now.

The dilemma

Here's the part I can't resolve neatly.

If no developer adopted AI tools, we'd collectively keep our leverage. Our skills would stay scarce, our positions secure. But that's not what happened. Some adopted it early. The market adjusted. And now the rest of us face a simple choice: adapt or become irrelevant, fast. It's a textbook prisoner's dilemma, and the cooperative outcome was never realistic.
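
The structure is easy to spell out. With made-up payoffs just to show the shape (your outcome listed first, higher is better):

                       Others hold out    Others adopt
    You hold out       3, 3               0, 4
    You adopt          4, 0               1, 1

Whatever everyone else does, you're individually better off adopting. So everyone adopts, and we all land in the bottom-right square, worse off than if nobody had.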

The problem was never individual developers choosing to use a tool. The problem is an industry that treats people as costs to cut and a market that rewards whoever cuts them fastest. Blaming workers for adapting to survive misses the point entirely.

So I use it. I'm not pretending the contradictions aren't there. The environmental cost is real. The effect on workers is real. The power consolidation is real. But individual purity doesn't fix structural problems. Modern life runs on contradictions like this one. Your phone, your clothes, your food. AI is one more line on that list.

I chose to accept the cognitive dissonance rather than perform a clean conscience I don't have.

What I hold it to

If you're considering working with me, here's what this means in practice. I use AI, and I hold the output to the same standard as hand-written code. This site runs automated tests, Lighthouse audits for performance and accessibility, and enforces quality gates on every deploy. The results are public on the quality dashboard. The easier it gets to generate code, the more the quality of the result depends on the standards of the person behind it.
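
To make "quality gates" concrete, here's a minimal sketch of what one can look like, using Lighthouse CI's lighthouserc.js format. The output directory and thresholds are illustrative assumptions, not this site's actual configuration:

    // lighthouserc.js -- illustrative sketch, not this site's real config
    module.exports = {
      ci: {
        collect: {
          // Audit the built static site (output path is an assumption)
          staticDistDir: './dist',
        },
        assert: {
          assertions: {
            // Fail the run if scores drop below these example thresholds
            'categories:performance': ['error', { minScore: 0.9 }],
            'categories:accessibility': ['error', { minScore: 0.95 }],
          },
        },
      },
    };

Wire lhci autorun into the deploy pipeline, and a failing assertion fails the deploy. That's the point of a gate: the standard holds even when the code was cheap to generate.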
