- What are deepfakes?
- Different categories of deepfake-related threats
- Some examples of the risks of deepfakes
- What can we do to prevent the abuse of deepfakes?
The term ‘deepfakes’ may already ring a bell, since it has popped up frequently in news coverage and tech reports in recent years, often concerning fake videos of politicians. One widely known example is the video in which former president Barack Obama appears to say things he never actually said. Many people believed the video was real at the time, which shows how dangerous deepfakes can be – if made convincingly, that is. Lately, the technique has also been used to defraud, con, or blackmail businesses, especially in the tech sector.
What are deepfakes?
In an increasingly online world, it is much easier to pretend to be someone you’re not. Fake identities have, of course, been around since long before the internet was invented. But the online databases in which sensitive personal data is stored have made it possible for hackers to access an enormous amount of private information about quite literally everyone. That personal data alone would make impersonating another person relatively simple, but deepfakes take the process a step further. A deepfake is manipulated content in which, for example, the person originally recorded is swapped with someone else, or says something they would normally never say. The term ‘deepfake’ is a combination of ‘deep learning’ (a specific type of machine learning) and ‘fake’. To create a deepfake, an AI algorithm is needed that can analyse the gathered data on several levels. Audio and video recordings, for example, can be analysed in this way, with the algorithm learning everything there is to know about the person in question. With these insights, existing footage can then be altered and used for whatever goal the scammer has in mind.
Businesses are becoming popular targets: cybercriminals create a deepfake video of a certain person, which is used to scam businesses out of money, apply for jobs within the company, or pretend to be a specific employee or even the CEO.
Different categories of deepfake-related threats
When scammers can impersonate any other person using deepfakes, they can abuse the technique in various other ways as well, and these con artists are increasingly targeting businesses – for instance, by creating a deepfake video of a certain person and using it to scam businesses out of money, apply for jobs within the company, or pretend to be a specific employee or even the CEO. This leads not only to miscommunication, but to identity theft, distrust, and misconduct on the work floor as well. Two concrete examples of these risks are discussed later in this article. There are several categories of deepfake-related threats, all of which can apply to the business sector: traditional cybercrime (fraud or extortion), legal threats (altering electronic information and material), societal threats (instigating social unrest – which can also occur within a business setting), and personal harassment (misconduct at work, often aimed at women). Legal threats and traditional cybercrime are generally the most visible in the context of conning companies. Scammers are increasingly trying to extort or trick businesses – or rather, their staff – into giving them money, and they are becoming more and more inventive. Companies are not the only ones growing concerned about these developments; so is the federal government. For this reason, several laws have been implemented to protect and assist the victims of deepfakes.
Some examples of the risks of deepfakes
Here are some examples of the various ways in which deepfakes are being used to con businesses and harass or blackmail employees.
Harassment and misconduct at work
As previously explained, scammers can impersonate anybody whose details they can access. Using the voice of a company’s CEO, for instance, they can instruct an employee to transfer money from one account to another, and the employee would be none the wiser. This, of course, has serious consequences for the whole business, and especially for the CEO. Employees are also targeted directly, however. Adam Forman, labour and employment attorney at the law firm Epstein Becker Green, warns about this: “One of the main things that have started happening is that deepfake videos are being ‘weaponised’ disproportionally against women. People in the workplace will place a coworker’s face on an adult film star’s body”. This example mainly applies to women, but men can fall victim to the shady use of deepfakes as well; they too can be bullied, harassed, and misled. The key question is: what are employers’ responsibilities in these cases? Should they intervene, and what could they possibly do against such an invisible and often untraceable attacker? Forman agrees that these are serious concerns. “You have workplace morale issues, compliance issues with your policy, and procedures that all jump up because of deepfakes”, he explains. Hence, some type of legal framework is definitely in order.
Millions are lost to scammers who cleverly infiltrate businesses and trick employees – and in some cases even employers themselves – into transferring money to them.
Tricking businesses into making money transfers
Aside from harassment and deception, there is also the issue of extortion and fraud. Millions are lost to scammers who cleverly infiltrate businesses and trick employees – and in some cases even employers – into transferring money to them. This happened in Hong Kong, for example, where in early 2020 someone who sounded like the company director called and emailed a branch manager. To actually sound like the director, the scammer used deep voice technology. The branch manager initially believed their boss was reaching out to them regarding ‘the acquisition of another company’. Fortunately, the scammers did not succeed, but they very well could have if the branch manager hadn’t questioned what was going on. The voice imitation was convincing enough to pass as the director’s, and with increasingly sophisticated technology becoming available, distinguishing real voices from altered versions will only get more difficult in the future.
What can we do to prevent the abuse of deepfakes?
Unfortunately, preventing deepfakes from being created, distributed, and used for nefarious purposes is no easy feat. Scammers are getting increasingly crafty with this technology, and hacking into online data systems is a piece of cake for these fraudsters as well. It goes without saying that businesses should protect their data to the best of their ability, for example by installing high-quality firewalls, encrypting their data, and hiring online security experts. These measures minimise the chances of their systems being infiltrated, although their effectiveness cannot be fully guaranteed. For this reason, companies should also be prepared to recognise deepfakes when they encounter them. This requires management as well as employees to have a good understanding of what deepfakes look like and what distinguishes them from real footage. One telltale characteristic of deepfakes, for instance, is that the visuals and sound are not always completely synchronised. The audio may be slightly delayed, which generally doesn’t happen in a professionally taped video for a job interview or during a live conversation with a job applicant. It is also important to thoroughly ‘scan’ video footage for inconsistencies, such as misplaced shadows or features that do not seem to match the rest of the person’s appearance.
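To give a feel for the audio-delay check described above, here is a minimal, illustrative sketch in Python. It estimates the offset between an audio loudness envelope and a mouth-movement signal using cross-correlation. This is a toy on synthetic data, not a production deepfake detector: the function name `estimate_av_lag`, the signal construction, and the 25 fps frame rate are all our own assumptions for the example.

```python
import numpy as np

def estimate_av_lag(audio_env, mouth_motion, fps=25):
    """Estimate how many frames the mouth-movement signal lags the audio
    envelope, via cross-correlation. A consistently large non-zero lag
    can hint at desynchronised (possibly manipulated) footage.

    Note: this is an illustrative sketch on per-frame signals; extracting
    such signals from real video/audio is a separate problem.
    """
    # Normalise both signals so the correlation peak is comparable.
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-9)
    m = (mouth_motion - mouth_motion.mean()) / (mouth_motion.std() + 1e-9)
    # Full cross-correlation; the peak position gives the best-aligning shift.
    corr = np.correlate(m, a, mode="full")
    lag_frames = int(corr.argmax()) - (len(a) - 1)  # >0: video trails audio
    return lag_frames, lag_frames / fps             # in frames and seconds

# Toy demo: a mouth signal that trails the audio by exactly 5 frames.
rng = np.random.default_rng(0)
audio = rng.random(200)
mouth = np.roll(audio, 5)  # simulate video lagging the audio
lag, lag_seconds = estimate_av_lag(audio, mouth)
```

In a real setting, the audio envelope could come from short-time energy of the soundtrack and the mouth signal from a facial-landmark tracker, but the principle is the same: audio and lip movement in genuine footage should peak at (close to) zero lag.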
Various authorities and specialists are already providing services to help businesses with these matters. For example, the Massachusetts Institute of Technology (MIT) has created ‘Detect Fakes’, where eight questions are posed to help determine whether a video is real or fake. Of course, checking every piece of content for inconsistencies is not foolproof, and this approach alone will not stop deepfake attacks from happening. However, being prepared and informed are vital steps in learning how to deal with these illegal activities. As scammers evolve and invent new ways to hack into online systems and steal identities, the corporate world – along with all other stakeholders – will need to evolve right along with them.
Deepfakes have turned out to be an effective way to scam businesses, and con artists are taking full advantage of this fact. By gathering data and using advanced technology to impersonate other people – both in video and audio formats – they can essentially infiltrate companies and instruct employees to do their bidding. Although this can’t always be prevented, there are ways for businesses to prepare for these situations and learn to recognise deepfakes. It is also critical to keep online security systems up to date, and to seek professional assistance in keeping personal data inaccessible to outsiders.