My Values & Virtues for Using AI

I built an AI system while my business partner was in Japan. When he got back, he wasn't impressed. He was concerned.

Written by Alex Hillman
Collaboratively edited with JFDIBot


It was the first week of October 2025, and my business partner Adam was about to leave for a much-deserved two-week vacation in Japan. On his way out the door, he joked that I’d probably replace him by the time he got back.

He knew I’d been building…something.

After a number of months building internal tools with Cursor, I was working on something pretty different.

Even with just Sonnet 4.5, it was clear that this tool had the potential to help me build something I’d dreamed about for nearly 2 decades.

That something was a very early version of my JFDI system. At the time it was barely more than a fancy bookmark manager.

While Adam was away, I built the first version of my AI executive assistant.

And the day he returned, I couldn’t wait to show him what it could do.

He was…less than enthused.

I let my excitement get the best of me, and for a brief moment, Adam may have thought that I had replaced him.

Thankfully, we’ve worked together for 15 years. Adam is as much a part of Indy Hall as I am, some days even more. He knows I wouldn’t actually replace him.

I couldn’t replace him.

But he did have real concerns

Many of our shared concerns center on privacy, transparency, safety, & climate.

Others are more about how these tools impact the humans who use them, and what that means for the people we care about in our community.

Adam knows me well enough to know that I’ve done my homework on all of these concerns, and that I’ve made informed decisions. We broadly share the same concerns, especially the ones where people we care about could be harmed. I’d like to write more about my understanding of each one individually.

But Adam suggested that I do something else first.

Revisit my Values & Virtues

Many years ago, while Indy Hall was going through a big growth phase, I wrote a framework we called the Values & Virtues of Indy Hall.

It was built around the Greek concept of Arête. Instead of a vague and squishy “mission statement,” we defined our constraints (what we won’t do) and our imperatives (what we must always do).

Said another way: the lines we won’t cross, and the commitments we always keep.

This framework provided a living document, always open to change, but the rule was that change had to be intentional.

In many ways, Adam and I have used this framework (and some related evolutions) as the foundation of our working relationship.

It’s guided how we run our community for over a decade. It’s given us language to make hard conversations easier, and untangle complex decisions into clear action.

We’ve used it every time it felt like we were making decisions in uncharted territory, or where the existing charts didn’t make sense to us.

Writing it down forced us to confront a decision less from a place of personal opinion and more from a place of “is this aligned with what we wrote down? If not, why, and should we change it?”

There are objectively bad choices to make in this world, but the worst choice of all is blindly accepting default options when others exist or can be created.

So Adam wisely suggested: if this framework could be such a valuable tool for other complex decisions, maybe the same approach could apply to the uncharted territory of using AI?

Challenge Accepted

What you’ll find below is a public snapshot of a living document based on the framework that’s served us for all of these years.

These bullets are far from comprehensive, but I believe they are directionally sound and will give us a good starting point to add new bullets to over time.

This isn’t a PR position. Each bullet came from real decisions that I made after research, consideration, and real conversations with people I trust and care for.

Like my original framework, each set of points is organized into:

  • imperatives: our commitments to ourselves and others
  • constraints: lines we’ve decided we won’t cross

In this new framework, I’ve added a third section for open questions. These are things we haven’t figured out yet, and we think saying that honestly is important.

I wrote these for me personally, and with Indy Hall and Stacking the Bricks in mind.

But as I’ve shown it to others who are navigating similar tensions, I thought it might serve them as well.

And let’s be honest: if you’re running a business, a community, a nonprofit, or even a side project in 2026, I bet you’re navigating some of these tensions too.

So we’re releasing these under a Creative Commons license.

Take what’s useful. Adapt what needs adapting. Throw out what doesn’t fit.

All I ask for is attribution, and that you share back improvements if you make them!


My AI Usage Imperatives: What We Always Do

1. Keep Humans at the Center

Always prioritize human creativity, relationships, and decision-making.

Use AI to remove drudgery and free up time to form and deepen real, authentic relationships.

2. Be Transparent

Disclose when AI has made a significant contribution to a piece of work or communication.

Use transparency to build trust rather than compliance.

3. Act Ethically & Intentionally (Whenever Possible)

Whenever possible, choose AI tools and partners that align with your values.

Acknowledge that perfect alignment isn’t always possible, but choose the better option whenever you can, even if it comes at greater cost.

Revisit these principles regularly to stay current and responsible as technology evolves.

4. Focus on Practical Improvements

Use AI to support consistency, help get unstuck, and stay on task toward meaningful goals.

The goal isn’t flash or novelty. It’s steady progress and sustained follow-through.

5. Build Respect & Shared Learning

Assume good intent from individuals using AI.

Share what you learn openly and encourage honest conversation about its use.

6. Use AI for Support, Not Substitution

Let AI assist in decision-making or creative processes, but ensure humans make the final calls. Use AI to strengthen real, authentic relationships, not replace them.

7. Stay in Control

Favor tools that allow portability, self-hosting, and independence from lock-in.

Maintain control over your data, privacy, and adaptability.

8. Understand the Tools You Use

Commit to learning how AI systems work to use them effectively and responsibly.

Treat curiosity and comprehension as part of using these tools responsibly.

9. Keep AI Optional

Offer AI as an aid, not a requirement.

Respect when someone chooses the slower, more human path to build connection through presence and authenticity.

10. Bridge Online Tools to Offline Goals

Look for ways to use online AI tools to accomplish offline tasks and goals: better connecting and listening with people, finishing physical space projects, strengthening in-person community.

11. Think in Worksheets & Templates

Whenever possible, think in worksheet/template formats that can be personalized to the person filling them out and/or the context the worksheet/template is being used in.


My AI Usage Constraints: What We Never Do

1. Never Replace Human Connection

Don’t use AI to substitute for real interactions. AI can support relationships, but never stand in for them.

2. Never Misrepresent AI Work as Human

If AI played a meaningful creative role, disclose it clearly.

No ghostwriting, no hidden automation.

3. Never Imitate or Steal Others’ Work

Use AI to learn from others’ work, not to copy or mimic their style or content.

4. Never Use AI Just Because You Can

Avoid using AI out of trendiness, curiosity, or convenience when the human way builds stronger, more authentic relationships.

5. Never Violate Privacy or Consent

Don’t use AI in ways that violate people’s privacy or proceed without informed consent.

Respect people’s boundaries.

6. Never Automate Away Empathy or Emotional Presence

Don’t let AI replace human care. Efficiency should never come at the expense of empathy.

7. Never Let AI Erode Human Skill or Agency

Avoid reliance that dulls critical thinking or creativity.

AI should enhance capability, not diminish it.

8. Never Defer Moral Judgment to Machines

Don’t let “the AI said so” become a justification.

Humans remain responsible for choices, outcomes, and the quality of their real, authentic relationships.

9. Never Ignore Ethical Red Flags

Don’t use tools that violate your values or harm others. Convenience is never worth it.

10. Never Ship Generated Text in Final Product

Don’t ship AI-generated text directly in final products.

Whenever possible, remove the possibility entirely by designing systems that prevent unedited AI output from reaching end users.


Open Questions Worth Discussing

At the moment, these are areas worth exploring as things continue to change and new use cases become possible.

Again, these are NOT comprehensive: the absence of something from this list doesn’t mean it isn’t worth discussing.

Where is the line between dictation and AI writing?

  • What is the meaningful difference between dictation (transcribing your words exactly) and AI processing that rewrites, reorganizes, or suggests “improvements” to your language?

  • Where should we draw boundaries around authentic voice versus AI-assisted output?

  • When, if ever, is it appropriate to use my own writing to train or fine-tune AI models to be more effective?

  • How do you handle consent and disclosure for writing that has mixed origin/provenance between human and AI assistance?

Accessibility vs. Dependency

  • How can these tools increase access for people who have previously been excluded? How can we ensure that existing gaps narrow, rather than widen?
  • How do you balance using AI tools while avoiding over-reliance that erodes skills?

Remember, this is just a snapshot from a living document. I expect that these ideas WILL change as we learn more and as the technology evolves.

If you find them useful or inspiring, or use them as a starting point for your own variation, I’d like to hear what you change. The best version of this is the one we write together.


Originally developed for Indy Hall, my coworking community in Philadelphia, PA.

Licensed under CC BY 4.0. You’re free to share and adapt with attribution.
