Ethics in Engineering and The Rise of Deepfakes

Do Engineers Think About Technology Equally as Both Tools and as Weapons?

Technology is both tool and weapon, and neither role is privileged over the other. In other words, any new technology changes the ability of humans to manipulate the physical environment. Engineering applies the laws of science. The law of conservation of mass states that matter is neither created nor destroyed, but matter can be manipulated. The same holds true for energy.

Matter can be manipulated, constructed, destructed, deconstructed and so on. Every tool is a weapon of matter, and every weapon is a tool of matter. Technological inventions and innovations sometimes drastically change the way human beings interact with the environment of our planet.


Petrochemical plants like this one in Saudi Arabia have given human societies vast quantities of energy to modernize society, unintentionally (at first) creating harmful waste byproducts in order to achieve an engineering objective. (Image courtesy of Wikipedia.)

In the 20th century, humans became far better able to sustain life in constructed societies that expand our chances of surviving to adulthood and into old age, but only by systematically converting mined physical resources into energy using techniques developed by engineers and scientists. The petrochemical revolution of the late 19th and early 20th centuries gifted human societies with huge amounts of energy, but it also cursed them with a legacy of wasteful physical byproducts, such as greenhouse gases. Engineers altered matter to produce energy, and as a byproduct produced pollution. The waste problems caused by the global energy industry are now well documented. The inventors and engineers of the past set out to create cheap, efficient energy to continue building societies; threatening the future of human civilization was never their intention.

Unintended Consequences of Engineering Ingenuity

The unintended consequences of technological inventions and innovations are very hard to foresee and insulate against. Depending on the urgency of the problem a particular invention or innovation promises to solve, a population's vague concerns about the future can be reflexively set aside by the promise of immediate or near-term relief.

Do engineers have an ethical responsibility to understand how their inventions and innovations are both tools and weapons? How do engineers respond when their work takes them outside the ethical boundaries they've established for themselves to maintain their own well-being?

What about the well-being of others? What happens when technological innovation and invention reaches the point it is at today in the year 2019? In the 21st century, humans are coming to terms with the fact that the computer revolution instigated a digital colonization of the physical world. This digitization of physical data has yielded powerful media tools and features that can be used to manipulate a person’s perception of reality.

How did we get here? Though the years of the 21st century have seen a staggering transformation of societies by way of technological innovation and invention, it all really started in the last quarter of the 20th century.

The Engineering Wizardry of Personal Computing Led to the Personal Data Industry

Personal computing in its infancy promised to amplify an individual's abilities as a worker or hobbyist. There was no internet, no connectivity and no danger of centralization if you bought Wozniak's Apple II in 1978. Color appeared on early PCs, then in 1984 Apple released the Macintosh with a bitmapped display, a palette of colors and applications like MacPaint. Six years later, in 1990, Photoshop hit the market and digital image manipulation took off commercially. In 1995, Microsoft's Windows 95 came out and created an industry-standard operating system for desktop computers. PayPal and Google both started up in 1998, and the dotcom bubble followed. Facebook started a few years later in 2004, followed by Twitter and Amazon Web Services in 2006. One year later, in 2007, the first iPhone came out. In 2008, Google's Android OS became available on a smartphone from HTC. The maker movement began shortly after this with the founding of MakerBot in January 2009 by Bre Pettis, and continued its upswell with events like Maker Faire (recently closed for good) rising in popularity into 2014. Artificial intelligence startup DeepMind was founded in 2010 and was acquired by Google in 2014. Around this time, Autodesk announced that it would be moving its 3D software portfolio to the cloud, following Adobe's move to Amazon Web Services a few years prior. Fusion 360 was partially hosted on the cloud, and Onshape then came on the scene with a totally cloud-based CAD platform in 2015.

Artificial intelligence became a popular news topic starting in January 2016 and now receives ubiquitous coverage.

Engineers and scientists created the technologies of the 21st century, and the people who used them generated massive amounts of data. That data became the raw material for a digital alchemy whose dominance over the physical is now measurable: the data brokerage industry is worth more than the oil industry.

Personal computing and the digitization of everything physical have outstripped the market value of the raw material of the petrochemical revolution. This is an astonishing feat of digital alchemy. But turning the digital data generated by people who signed Faustian terms-of-service agreements into gold had unintended consequences for American democracy.

The Transformation of Personal Digital Data: From Tool to Weapon

The Cambridge Analytica scandal, in which the behavior of voters in the 2016 presidential election was influenced via third-party developer access to the data of millions of unwitting Facebook users, is another unintended consequence of technological innovation. But was it an unethical use of unethical data collection if everyone signs agreements they don't understand, becoming digital data products to be scraped and harvested by unethical actors who use social media technology to divide rather than unite?

It seems apparent that human behavior, like matter, can be manipulated: the media a person consumes influences how they act. Perhaps Cambridge Analytica happened because 21st-century computing technology was intended to provide clarity and community as a cultural tool. The cultural tool was not thought of enough as a weapon, or unintended consequences that could have been prevented were ignored for ethically questionable reasons.

Either way, one has to accept that digital media can be altered to the point where humans might not be able to tell whether a piece of media is a passive capture of what actually happened in the physical environment.

We know that the will to use our digital data against us to deceive and manipulate exists, as does the will to create media content with increasingly convincing masks of authenticity in order to deceive and persuade.

Today, some extremely gifted graphic artists are working with extraordinarily powerful computers and sensor technology to create photorealistic CGI characters, facial expressions, water movements and hair for Hollywood movies.

From Digitally Altering Images to CGI to Deepfakes

The ability to digitally manipulate images with powerful software like Adobe Photoshop has existed for decades now. The software passed through the hands of its inventors and was marketed and sold into various industries, especially the fashion, media and entertainment industries. Photographers using increasingly powerful digital cameras could suddenly alter images after a shoot to match personal or commercial standards of aesthetic beauty.

Photoshop became so popular that, like "Google," "photoshop" became both a verb and an adjective in the popular vernacular. "It looks photoshopped," or "oh, you can see the warped lines from photoshopping it," or "we can just photoshop those muscles in." Images could be altered, people's faces could be swapped out for others, you could make hybrid animals, add horse teeth to pretty much anyone or anything, and so on.

Hao Li’s Pioneering Facial Mapping Work and the Rise of Deepfakes

Hao Li is an assistant professor of computer science at USC. Prior to joining the USC faculty, he was a research lead at Industrial Light & Magic/Lucasfilm. While at ILM, Li not only developed next-generation real-time performance capture technologies for virtual production and visual effects, he also began collaborating with Autodesk's Reality Capture Lab in San Francisco. Since the lab was only a 10-minute walk from work, Li would stop in for stimulating discussions with the ReCap team, who gladly lent him equipment to process reality capture data. His work spans geometry processing, 3D reconstruction, performance capture and human hair digitization, and his algorithms are used at leading visual effects studios and by manufacturers of state-of-the-art radiation therapy systems. In 2009, Li developed real-time facial tracking technology using 3D scanning. It was spun into an application called Faceshift and sold to Apple in 2015; it is now used to power Apple's Animoji feature. He now leads a startup called Pinscreen.

His most recent startup Pinscreen claims to use more advanced AI than deepfakes to produce photorealistic digital avatars. Interestingly, Li is also being sued by a former Pinscreen employee for faking data presented to SIGGRAPH organizers.

The Popularity of Deepfakes Continues to Rise

If you haven’t seen deepfakes by now, chances are you will sometime in the near future. “Deepfake” is a portmanteau of “deep learning,” a subset of AI, and “fake,” as in false. The concept is to take a video or film clip and “replace” an actor’s face with media content generated by software that uses a generative adversarial network (GAN). A GAN pits two algorithms against each other: a generator produces candidate replacement faces matched to the facial kinetics of the host face, while a discriminator acts as a judge, trying to tell the generated images from real ones. The generator keeps trying to “fool” the judge, and the more often the judge is fooled, the better and more convincing the deepfake.
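The adversarial loop described above can be sketched in a few lines. The example below is a toy illustration, not deepfake code: instead of faces, a one-parameter generator learns to shift random noise toward a “real” data distribution (numbers centered on 4), while a logistic-regression discriminator plays the judge. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

REAL_MEAN = 4.0   # the "real" data the generator must imitate
theta = 0.0       # generator parameter: shifts noise toward the real data
w, b = 0.0, 0.0   # discriminator parameters (logistic regression on a scalar)
lr = 0.02

for step in range(5000):
    real = rng.normal(REAL_MEAN, 1.0, 64)     # real samples
    fake = theta + rng.normal(0.0, 1.0, 64)   # generator output

    # Judge step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: nudge theta so the judge is fooled more often.
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean(1 - d_fake) * w

print(round(theta, 2))  # theta drifts toward REAL_MEAN as the judge is fooled
```

Real deepfake software replaces the single parameter `theta` with millions of neural network weights and the scalar samples with face images, but the tug-of-war between generator and judge is the same.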

How easy is it to create a deepfake?

Creating a First Deepfake

After searching “deepfake tutorial” on YouTube, one might land on this tutorial from the derpfakes channel.

This tutorial uses a software package called DeepFaceLab (DFL), which is found on GitHub. If one is using a Windows computer, there are prebuilt Windows releases on the repository. From here, one can select the correct version depending on the make and model of graphics card (NVIDIA, AMD, Intel).

A user can now download and extract the latest build of DeepFaceLab onto their local hard disk. There are two folders, one labeled “workspace” and the other named “_internal.” Within the “workspace” folder, there is a destination folder called “data_dst.” This folder holds the frames that will be extracted from the original video. The second folder is called “data_src,” where “src” stands for source.

The source data contains the face that will digitally replace the original one. There are two corresponding video clips, one for the destination (original face) and one for the source (replacement face). The two default clips are of actors Shia LaBeouf and Robert Downey Jr. If a user were inclined to switch out these video clips, they would have to rename the files to correspond to either “data_src” or “data_dst.” The third folder in the “workspace” folder is labeled “model.” This folder will hold the eventual datasets from both models, but not the scripts themselves.

To begin the process of creating a deepfake, a user splits the video clips into separate frames using a batch file; each clip becomes a set of PNG files, one per frame. Then, per the instructions in the tutorial, the user runs a series of batch files to prepare the images for the GPU and launches the deep learning training module, “train H64.” Training can run for as long as the user likes; pressing Enter stops it and saves the model file. Finally, another series of batch files converts the combined model output from PNG files back into a video clip format.
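Under the hood, the split and reassembly steps are ordinary video-to-frames conversions of the kind ffmpeg performs. The sketch below only builds the equivalent command lines rather than running them; the filenames, frame pattern and frame rate are illustrative assumptions, not DFL's actual batch-file contents.

```python
def split_to_frames_cmd(video, frames_dir):
    """Build an ffmpeg command that splits a clip into numbered PNG frames."""
    return ["ffmpeg", "-i", video, f"{frames_dir}/%05d.png"]

def frames_to_video_cmd(frames_dir, out_video, fps=30):
    """Build an ffmpeg command that reassembles frames into a video clip."""
    return ["ffmpeg", "-r", str(fps), "-i", f"{frames_dir}/%05d.png", out_video]

# The destination clip is split before training, and the converted
# frames are reassembled into the final deepfake afterward.
print(split_to_frames_cmd("workspace/data_dst.mp4", "workspace/data_dst"))
print(frames_to_video_cmd("workspace/merged", "result.mp4"))
```

DFL wraps steps like these in numbered batch files so that a user never has to type the commands by hand.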

Deepfake processing

Simple deepfake processing using the DeepFaceLab Windows build: executing batch files for the sorting and matching algorithms that refine the match between the two sets of images, called the source and destination images.

Here is the deepfake I created from the tutorial. Not very good, but a solid first step which didn’t take much time.

Bottom Line

Following the 2016 U.S. election, the ethics of technology companies have come under sustained scrutiny, particularly social media companies like Facebook, though Apple, Google and Microsoft all benefit from their relationships with one another. The manipulation of elections around the world by third-party developers like Cambridge Analytica shows that centralizing technologies marketed with words like "free," "community" and "interconnected," and created by brilliant engineers, do have unintended consequences when those innovations and inventions are not properly thought of as both tools and weapons.

Deepfakes present an obvious ethical issue for their continued use and development. Culturally, we may be in the middle of a future shock wave, not quite understanding how these technologies took root in reality: pick-pocketing pieces of the physical world, digitizing them, quantifying them and selling them off without a second thought to unintended consequences. Benevolent, constructive engineering technologies were never safeguarded against the inversion of purpose that runs along the double-edged sword of innovation and invention.

And deepfaking celebrities is one thing. I took a small step into deepfaking and made a bad one in a few hours over a couple of afternoons, just by following instructions. If GANs and TensorFlow make complicated tasks like face swapping in video clips this accessible, what's to stop people from weaponizing deepfakes?

Engineers aren’t usually asked to think ethically as part of their jobs, but the unintended consequences of 21st-century technological advances are showing that digitization might be antithetical to democracy. The question of what defines ethical engineering will likely morph considerably in the next few years, and answering it may not be possible. But safeguarding innovations and inventions against weaponization and other unintended consequences is an engineering challenge worth taking on.