Are there any types of AI that should never be built in the first place? The "Non-Deployment Argument" (the claim that some forms of AI should never be deployed, or even built) has recently attracted significant controversy: non-deployment skeptics fear that it will stifle innovation, and argue that the continued deployment and incremental optimization of AI tools will ultimately benefit everyone in society. However, there are good reasons to subject to critical scrutiny the view that we should always try to build, deploy, and gradually optimize new AI tools: in the context of AI, making things better is not always good enough. In specific cases, there are overriding ethical and political reasons, such as the ongoing presence of entrenched structures of social injustice, why we ought not to continue to build, deploy, and optimize particular AI tools for particular tasks. Rather than defaulting to optimization, we have a moral and political duty to critically interrogate and contest the value and purpose of using AI in a given domain in the first place.