I do not and will not use generative artificial intelligence in my photography and journalism.

Generative artificial intelligence capable of creating photorealistic images without the use of a camera—in particular image generators like DALL-E, Midjourney, and Stable Diffusion—poses an existential threat to photojournalism and to journalism as a whole. In my view, generative AI is antithetical to photojournalism’s principles and purposes.

I’m a photojournalist. Both parts of that title—photo and journalist—hold equal weight for me. I became a photojournalist because I believed that the photos and moving images I saw in newspapers and on TV showed me the world as I had not seen it before. I continue to make what we once called “straight photos”—images with no edits or alterations that add, remove, or distort content in the frame—because I believe in their value and their power. I understand that my photos are not absolute truth. They are evidence of moments, visual quotations from life around me as it unfolds, encounters with the world in which we live.

Yes, I make choices about whom, what, when, where, why, and how to photograph. If you have questions or doubts about any of my photos, you can ask me. You can read what I have written, in the caption and elsewhere. You can research my 35+ year career. You can also look for photos or articles by other people who witnessed the same event. And then you can decide whether to trust me and my photos.

You can’t do that with artificial intelligence. This is why I oppose the use of AI in the creation of journalistic images, why I wholeheartedly support the Statement of Principles of the Writing with Light photojournalism working group on AI, of which I am a founding member, and why I will adhere to its tenets.

Concretely, this means that I will use my digital cameras and photo software as I used my film cameras and darkroom: to represent the people I photograph with respect and dignity, and to document what I witness with the utmost fidelity to the scene and with personal integrity.