The possibilities of AI seem boundless, but worries around its environmental impact are only growing.
Over the past decade especially, ethical concerns have mounted around bias in AI algorithms, exploitative labor, where companies get their training data, what AI should and shouldn’t be used for, who gets to create AI, and who gets to profit from it. As society increasingly grapples with the ethical facets of AI, so too are the engineers and developers ushering in this new generation of tech.
In LeadDev’s 2025 AI impact survey, “ethical issues” emerged as one of the top problem areas for engineering organizations, cited by 45% of respondents. Developer ethical concerns about AI go back years, and now they’re expanding to new issues and levels of urgency as AI development ramps up.
“With the widespread adoption of large language models, the ethics of technology and automation have become much more tangible and immediate,” said Rumi Allbert, an engineer and philosophy professor currently teaching an AI ethics course at Fei Tan College, New York. “While fairness, bias, and transparency were important ethical concerns even before the era of LLMs, the way we interact with these models now introduces a layer of complexity that we haven’t quite grappled with before.”
Choosing the most ethical AI models – and projects
In his role as VP of engineering at cybersecurity firm Cork Protection, Marcus Recck said ethical considerations about specific AI models are shaping which models and products the company is choosing to use. He’s particularly concerned with data ethics: what data models were trained on, what data gets sent to the models, and if their data will be used to train future models.
“If you zoom out and look at what LLMs are available and what models people are consuming, I think you see that some companies are training their models to be more ethical than others,” he said, adding that Cork Protection uses Google’s Gemini because its privacy protections make them confident their data won’t be shared publicly or used for model training.
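To make that kind of vetting concrete, here is a minimal sketch of how a team might encode its data-ethics criteria as an explicit policy check rather than an ad hoc judgment call. The provider names, the ModelPolicy fields, and the is_approved helper are hypothetical illustrations, not Cork Protection’s actual process or any vendor’s real terms.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ModelPolicy:
        """What a team has verified about a model provider's data practices."""
        provider: str
        trains_on_customer_data: bool    # does the vendor use our prompts for training?
        data_residency_documented: bool  # do we know where our data is stored?
        training_data_disclosed: bool    # has the vendor said what the model was trained on?

    # Hypothetical vetting results a team might record after reviewing vendor terms.
    VETTED_MODELS = {
        "approved-llm": ModelPolicy("approved-llm", False, True, True),
        "unvetted-llm": ModelPolicy("unvetted-llm", True, False, False),
    }

    def is_approved(model_name: str) -> bool:
        """Allow a model only if it clears the team's minimum data-ethics bar."""
        policy = VETTED_MODELS.get(model_name)
        if policy is None:
            return False  # unknown models are rejected by default
        return not policy.trains_on_customer_data and policy.data_residency_documented

    for name in ("approved-llm", "unvetted-llm", "unknown-llm"):
        print(name, "->", "use" if is_approved(name) else "do not use")

The point of a check like this is simply to make the team’s reasoning auditable: a model is used because its data practices were reviewed, not because it happened to be convenient.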
Ethical concerns around copyright and data used to train generative AI models are far-reaching. John Cranney, VP of engineering at Secure Code Warrior, which provides a training platform to educate developers on secure coding, said that while he never thought much about copyright issues before the rise of LLMs, watching the issue become real rather than theoretical has put it top of mind for him.
He added that inside his company, there are several engineers who “view LLMs very unfavourably because they believe that LLM providers scraped open source libraries to build the training datasets.” The murkiness of the copyright issues is exacerbated by the fact that none of the leading LLM providers offer details about the datasets they used for training. “In fact, they treat this information as a trade secret,” he said. At the same time, he believes the pressures of the market are making it difficult for teams to prioritize the ethics of models and datasets.
“In commercial engineering contexts, teams don’t have the option of saying ‘these AI datasets may be tainted, so we won’t use them.’ Competitive pressures are too strong,” he said.
Freelance developer Jared White, however, is doing just that – operating under a strict “no GenAI” policy. His decision is driven by an ethical objection to training AI models on scraped data without consent, alongside a slew of other concerns, from the environmental damage of data centers to AI’s erosion of people’s trust. He said he’s passed on opportunities involving generative AI, including recently parting ways with a long-time client after they went all-in on agentic coding and wouldn’t accept his stipulation limiting generative AI use to personal pre-production workflows.
“I’m not willing to take on any projects where the use of GenAI is required at the team level,” he said. “If other folks use GenAI tools in their personal workflows for research or brainstorming or whatnot, I’m certainly not in a position to police them, but I don’t expect any output from GenAI tools to appear in production work like code, documents, marketing, or graphics.”
Lack of transparency plagues AI development and deployment
Concerns around transparency stretch far beyond copyright infringement in datasets, and they have consequences for the ethics of how we use models.
When the origin of a training dataset is unknown, biases baked into foundation models can proliferate unchecked. Alex Lisle, CTO at Reality Defender, a company developing deepfake detection software for enterprises, said biases deep in foundation models are one of the biggest ethical concerns surrounding AI. Countless studies have revealed biases in AI algorithms and shown that they disproportionately affect women and people of color.
Without extensive knowledge of the datasets or how exactly the models determine outputs, the ability to fully understand the downstream impacts, let alone actually resolve these issues, is hindered. This is taking on increased importance as businesses and institutions, from governments to healthcare providers, quickly integrate AI into everyday processes. The stakes are high with AI now influencing decision-making, such as determining who gets approved for public assistance and what the level of care people receive in hospitals.
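To make “bias” concrete: one simple way auditors quantify it is to compare how often a model grants a favorable outcome to different groups of people. The sketch below computes that kind of demographic parity gap over an invented sample of decisions; the groups and numbers are hypothetical, not drawn from the studies mentioned above.

    from collections import defaultdict

    def approval_rates(decisions):
        """Approval rate per demographic group.

        `decisions` is a list of (group, approved) pairs, where `approved` is True
        if the model granted the benefit (a loan, public assistance, a level of care).
        """
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def demographic_parity_gap(decisions):
        """Largest difference in approval rate between any two groups."""
        rates = approval_rates(decisions)
        return max(rates.values()) - min(rates.values())

    # Hypothetical audit sample: a gap like this would prompt a closer look at the
    # training data and at how the model reaches its decisions.
    sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
              + [("group_b", True)] * 55 + [("group_b", False)] * 45)

    print(approval_rates(sample))          # {'group_a': 0.8, 'group_b': 0.55}
    print(demographic_parity_gap(sample))  # 0.25, i.e. a 25-point gap between groups

Metrics like this only surface a disparity; without visibility into the training data and the model itself, explaining or fixing it is far harder, which is exactly the transparency problem developers describe.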
“The black box of LLMs should be a cause of concern,” Lisle said, adding that he worries about what information LLMs will be using to make decisions. “We don’t know what datasets the foundation models have leveraged. We don’t know how they weigh the staggeringly large amounts of data now being placed into the context windows. There are real ethical concerns as we place more and more decisions into the realm of agentic AI.”
Overselling and misrepresenting
For some developers, the way many executives and industry thought leaders talk about AI has become their biggest ethical objection. Eren Celebi, principal engineer at advertising firm WPP, worries about the sheer volume of “unfounded claims” execs are touting. He believes people are overselling the technology, misrepresenting how it works, and inflating what it can actually do.
“I work with a lot of executives. I think I understand their language, can speak their language, and I feel like I’m a good translator between the two sides. And I get very frustrated when they have this attitude of ‘it’s magic, it’s amazing, we’ll do anything with it,’” he said.
Celebi has seen firsthand how the overhyping of AI creates a gap between the tech’s perceived and actual real-world capabilities. For example, when a client in the healthcare space came to his team to create an AI application that would give pediatric nutritionists recommendations for parents, he said his entire team was concerned. As well as revealing how easily people can be led to trust AI in situations where there is no room for error, the request posed an ethical dilemma for his team, who felt the use case was too high-risk.
“We’re going to be recommending what babies should eat and other supplements they should have. I was quite scared because these are stochastic, so nothing is 100% true,” he said. In the end, they successfully convinced the client to pursue an information research tool for the providers instead, which would help them parse industry research and form their own conclusions, rather than offer recommendations.
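Celebi’s point that these systems are stochastic comes down to how LLMs generate text: each token is sampled from a probability distribution, so the same input can yield different outputs. The sketch below illustrates temperature-scaled sampling over made-up candidate scores; the candidates and numbers are hypothetical and not taken from any real model.

    import math
    import random

    def softmax_with_temperature(scores, temperature=1.0):
        """Convert raw model scores into sampling probabilities.

        Lower temperature sharpens the distribution (more deterministic);
        higher temperature flattens it, making unlikely options more probable.
        """
        scaled = [s / temperature for s in scores]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical candidate recommendations with made-up model scores.
    candidates = ["recommendation_a", "recommendation_b", "recommendation_c"]
    scores = [3.0, 2.5, 0.5]

    probs = softmax_with_temperature(scores, temperature=0.8)
    for _ in range(3):
        # The same input can produce a different pick on each run.
        print(random.choices(candidates, weights=probs, k=1)[0])

Even the lowest-scoring option is sampled occasionally, which is why “nothing is 100% true” and why a recommendation engine for infant nutrition struck the team as too risky.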
Celebi isn’t alone. White also called out the “constant overhyping of what these tools are capable of,” saying AI is actually “just very clever math.” Similarly, Lisle said that while generative AI is a remarkable tool, it’s still just a tool that needs someone to operate it.
“There is a danger that this somehow gets lost in the messaging,” he said.
Environmental concerns grow alongside data center footprints
The negative environmental impacts of data centers and training AI models regularly make headlines.
Systemic issues like environmental concerns – as well as the use of copyrighted data and exploitative labor practices – “represent some of the most challenging aspects of the current AI scene,” according to Allbert, because they’ve become foundational to how these systems are built and scaled. The accelerating pace of the AI “race” has only entrenched current, dubious practices further, as companies double down on what has previously proved successful.
“I think [these systemic issues] have reached a scale where they’re increasingly being treated as externalities, swept under the rug as major AI labs prioritize rapid development and market positioning over these fundamental concerns,” he said, adding that the infrastructure, economic incentives, and competitive dynamics of AI are so deeply entrenched that it makes addressing these issues from their roots feel “almost impossible.”
Reports have shown that the rapid scaling of data centers used to train AI models and run inference on them is driving electrical grids to the brink and consuming unsustainable amounts of water. Electricity powers the unprecedented amount of computing that current AI systems require, and that computing runs hardware so hot it must be constantly cooled with water.
On this, Cranney said he expects the U.S. and China, which lead in AI development and are in a trade war over the technology, to increasingly allocate significant portions of their grids to compute for AI training and inference.
“All of those chips require energy, and we’re still living in a world of finite energy sources while trying to transition to net zero,” he said. “Those things aren’t compatible.”
The ramifications of all this are already being felt by communities, as White points out. For example, Bloomberg found that two-thirds of the 160+ new data centers built in the U.S. since 2022 are in places already struggling with water shortages.
“More and more, we are seeing local communities devastated by these projects,” he said.

What will become of society (and us)?
Looming even larger than the specifics of how AI models are built and operated are developers’ worries about the ethics of the more sweeping ways AI will reshape society and the human experience.
While many still believe positive change can come from AI, they are anxious about its socioeconomic and political repercussions. Cranney said he thinks a lot about the effect AI will have on employment and job displacement, echoing common concerns about AI taking over entry-level jobs or even causing mass unemployment.
Allbert is most preoccupied by the ethical implications of the evolving dynamics of human-AI interaction, in particular our tendency to anthropomorphize AI systems and the consequences of mediating all of our thinking through AI. He worries that a growing dependency on AI will lead to a decline in critical reasoning, with implications that extend to broader societal transformations: labor distribution, mass automation, and “the profound philosophical questions that emerge regarding human purpose in a post-work society.”
“It feels like the pace of technological advancement far outstrips the progress we’re making in ethical considerations. The personal and interactive nature of these models brings about nuanced ethical questions that are still trying to be answered,” said Allbert. “In my view, the industry’s rapid development is outpacing the integration of ethical safeguards, and that’s a concern that I think we all need to address.”