Deepfakes: Becoming an accomplice to cybercrime!

Deepfakes are an escalating cybersecurity threat to enterprise organizations. Cybercriminals are investing heavily in deepfake technologies, which use artificial intelligence and machine learning to create, synthesize or manipulate digital content (including images, video, audio and text) for cyberattacks and fraud. The technology can realistically reproduce or alter a person's appearance, voice, demeanor or vocabulary, deceiving victims into believing that what they see, hear or read is authentic.

In March 2021, the FBI warned that malicious actors were using synthetic or manipulated digital content in existing spear-phishing and social engineering campaigns, and that these attacks were gaining momentum. Given the sophistication of the synthetic media involved, the effects could become more severe and widespread. Enterprise organizations must therefore recognize the growing threat of deepfakes and take effective measures to defend against deepfake-enhanced cyberattacks and fraud.

Cybercriminals are using deepfakes

Mark Ward, senior research analyst at the Information Security Forum, said, “It’s often said that porn drives technology adoption, and that’s true for deepfakes. Now, the technology is taking off in other areas—especially in organized cybercriminal groups.”

For now, deepfake-driven cyberattacks remain rare; they are usually carried out only by professional criminal gangs or state-backed groups, and only a handful of successful cases have been documented. But once the tools, techniques and potential rewards become widely known, the approach will spread, as such techniques always do.

This is already happening on dark web forums, where criminals share deepfake technology and expertise. Researchers at cloud infrastructure vendor VMware have found numerous dark web tutorials covering deepfake tools and techniques, suggesting that threat actors are turning to the dark web to offer customized services and tutorials for visual and audio deepfakes designed to bypass and subvert security measures.

Deepfakes are an enhanced social engineering technique

Mark Ward cited evidence, including dark web chatter, that criminal groups specializing in sophisticated social engineering are increasingly interested in deepfakes. These gangs tend to use deepfake technology in business email compromise (BEC) campaigns, tricking finance and accounting staff at large organizations into transferring funds to accounts the scammers control. Tools currently discussed in criminal chatrooms reportedly harvest video, audio and blog posts from the public profiles of senior executives to build convincing fake messages demanding cash transfers or urgent payments.

Content generated with deepfake technology can reconstruct identifiable characteristics, such as a person's accent and speaking style, lending extra credibility to an attack. Deepfakes make attacks easier to carry out and harder to prevent. Audio deepfakes have proven particularly effective in social engineering attacks aimed at gaining access to corporate data and systems: an attacker impersonating an executive who is traveling or out of the office can ask a victim to reset a password or perform an operation that grants the fraudster access to corporate assets. This has become a common deepfake-enabled fraud tactic.

Given that cybercriminals are exploiting employees working remotely from home, such attacks will only increase. We are already seeing deepfakes used in phishing attacks and in compromises of business email and collaboration platforms such as Slack and Microsoft Teams. Phishing campaigns run over business communication platforms provide an ideal delivery mechanism for deepfakes, because organizations and their users implicitly trust those channels.

Deepfakes aim to bypass biometric authentication

Another risky deepfake trend is the creation of content designed to bypass biometric verification. Biometric technologies such as facial and voice recognition provide an additional layer of security, automatically verifying a person's identity based on their unique characteristics. However, deepfakes that accurately reproduce a person's appearance or voice can circumvent such authentication, creating significant risk for organizations that rely on biometrics in their identity and access management strategies. Criminals are now investing in this capability amid widespread remote work.

The Covid-19 pandemic and the advent of remote working have spawned vast amounts of audio and video data that can be fed into machine learning systems to create compelling replicas.

Albert Roux, vice president of anti-fraud at identity and authentication firm Onfido, acknowledged that deepfakes pose a significant risk to biometric-based authentication. He explained, “Any organization that leverages authentication to conduct business and protect itself from cybercriminals can be vulnerable to deepfakes. Fraudsters have noticed popular videos such as the viral Tom Cruise deepfakes and the work of well-known YouTube creators such as Corridor Digital, and are using deepfake tools and codebases to bypass online authentication checks. In addition, free open-source applications now make it easier for fraudsters with limited technical knowledge to generate deepfake videos and photos.”

Defending against deepfake cyber threats

Fraudsters invest in deepfakes to distort digital reality for illicit gain, whether through text, voice or video, and the technology thrives in chaotic and uncertain environments.

While the threat posed by deepfake-enabled cyberattacks may seem serious, organizations can defend against them with a variety of measures, including training and education, advanced technologies, and threat intelligence, all aimed at countering malicious deepfake activity.

First, educating and training employees about deepfake social engineering attacks, especially the most heavily targeted staff, is a key factor in mitigating risk. It is essential to focus on finance employees, alert them to the possibility of such attacks, and empower them to slow down the payment process whenever in doubt.

Second, on the technology side, enterprise organizations should deploy more analytics systems to detect abnormal behavior promptly. Threat intelligence can also help, as it can show whether an organization is being targeted, a department is being monitored, or a particular group is becoming active in this area. Deepfakes take time to set up and execute, giving potential victims time to spot the warning signs and act.
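As an illustration of the kind of anomaly detection such an analytics system might perform, the sketch below flags payment requests that deviate sharply from a historical baseline. The z-score approach and the threshold value are assumptions chosen for illustration, not the behavior of any specific product:

```python
import statistics

def flag_anomalous_payments(history, new_requests, z_threshold=3.0):
    """Flag payment amounts that deviate sharply from the historical baseline.

    history: list of past payment amounts for this approver or department
    new_requests: list of (request_id, amount) tuples awaiting approval
    Returns the ids of requests whose amount lies more than z_threshold
    standard deviations from the historical mean.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    flagged = []
    for request_id, amount in new_requests:
        # Distance from the baseline, in standard deviations.
        z = abs(amount - mean) / stdev if stdev else 0.0
        if z > z_threshold:
            flagged.append(request_id)
    return flagged
```

A real deployment would use richer features than the amount alone (recipient account age, request channel, time of day), but even this simple baseline catches the classic BEC pattern of a sudden, unusually large transfer request.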

In addition, enterprise organizations can mount an effective defense by issuing randomized instructions to users, such as asking them to look in different directions or read a phrase aloud, because deepfake creators cannot anticipate the thousands of possible requests. While cybercriminals can manipulate deepfakes in real time, video quality degrades significantly, because the processing power the technology requires prevents it from reacting quickly. Users who repeatedly fail such challenges can be flagged for further investigation.
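The challenge-and-flag logic described above can be sketched as follows. The challenge pool, the failure threshold, and the class interface are illustrative assumptions; a production liveness system would draw from a far larger, continually changing challenge set:

```python
import random

# Illustrative pool of liveness challenges; a real system would use a
# much larger and regularly rotated set.
CHALLENGES = [
    "turn your head to the left",
    "turn your head to the right",
    "look up at the ceiling",
    "read this phrase aloud",
    "blink twice slowly",
]

class LivenessChecker:
    """Issue random challenges and flag users who repeatedly fail them."""

    def __init__(self, max_failures=2, rng=None):
        self.max_failures = max_failures
        self.failures = {}  # user_id -> consecutive failure count
        self.rng = rng or random.Random()

    def issue_challenge(self):
        # An unpredictable choice: a deepfake pipeline cannot pre-render
        # convincing responses for every possible instruction.
        return self.rng.choice(CHALLENGES)

    def record_result(self, user_id, passed):
        """Record a pass/fail outcome; return True if the user should be
        flagged for further investigation."""
        if passed:
            self.failures[user_id] = 0
            return False
        self.failures[user_id] = self.failures.get(user_id, 0) + 1
        return self.failures[user_id] >= self.max_failures
```

The key design point is the separation of concerns: challenge selection stays unpredictable, while the failure counter turns repeated wrong responses into an investigation signal rather than an immediate lockout.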


Author: Yoyokuo