Do We Really Need to Be Storing or Sharing this Data?

By: Jonathan Stone on December 06, 2019

Cybersecurity | In the Media | News

Simply having or transmitting data is a risk. It’s often a necessary risk—companies have to store and share data constantly in order to do business—but it’s important to remember that if something is deleted or isn’t shared, it’s much less likely to fall into the wrong hands.

This is something we talk about with our cybersecurity clients throughout Connecticut on a regular basis. Part of our process is to take stock of who they are sharing data with and why. If there's no reason to be sharing a particular type of data with a particular party, then not sharing it cuts down on the risk of it being breached at some point. And if data isn't in use anymore, deleting it eliminates something a hacker could get hold of.
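To make that kind of review concrete, here is a minimal sketch of a data-sharing inventory in Python. The DataFlow record, its field names, and the example entries are all hypothetical, invented for illustration; a real review would draw on your actual vendor relationships and systems.

```python
# Minimal sketch of a data-sharing inventory review.
# The DataFlow record and the example entries below are hypothetical,
# invented for illustration only.
from dataclasses import dataclass


@dataclass
class DataFlow:
    data_type: str
    shared_with: str
    business_reason: str  # empty if no documented reason exists


flows = [
    DataFlow("payroll records", "benefits provider", "plan administration"),
    DataFlow("customer emails", "marketing vendor", ""),  # no documented reason
]

# Any flow without a documented business reason is a candidate to stop
# sharing entirely, which removes it as potential breach exposure.
for flow in flows:
    if not flow.business_reason:
        print(f"Review: why are we sharing {flow.data_type} "
              f"with {flow.shared_with}?")
```

The point of the exercise isn't the code; it's that every flow either has a documented reason to exist or becomes a candidate for elimination.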

Recently, I had the chance to explore how this plays out on the consumer level in an interview with NBC Connecticut for an investigative piece about apps that collect 3-D images of your face.

The apps described in this news story essentially create "deep fake" videos using a 3-D image of your face. A deep fake is an artificially generated video that convincingly appears to show a particular person but does not. It's easy to gather the data needed to create deep fake videos of famous people because so many images of their faces are available online that virtually every angle can be found. Last year, comedian and filmmaker Jordan Peele lent his Obama impersonation to a very convincing deep fake video, which helped launch this new technology into the mainstream.

When you allow an app to take photos of your face from every angle in order to place your image into movie scenes, you are essentially creating a deep fake. This may seem harmless enough, and at this point, it is. However, it's easy to imagine a not-too-distant future in which a deep fake video of you could be used by cyber criminals. For instance, the 3-D image capture these apps perform is similar to the Face ID feature on the iPhone. It may soon be possible to use a deep fake image to unlock a phone.

This example illustrates an important point about data security. It's important to think not only about the consequences of losing control of your data now, but also about what might happen if that data were lost in the future. Often, the data your company shares or stores seems so harmless that you don't think twice about it. But limiting that data could decrease the severity of a breach.

When the Capital One breach occurred earlier this year and exposed such a wide range of data going back to 2005, the biggest questions I had were: Was it necessary to have kept all of that data? If the data older than, say, 2012 had been deleted, how much smaller would the breach have been?
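To illustrate how a retention policy like that might work in practice, here is a minimal sketch in Python using SQLite. The customer_records table, its columns, and the seven-year window are assumptions made up for this example, not anything from the Capital One case.

```python
# Minimal sketch of a data-retention purge. The "customer_records"
# table, its columns, and the retention window are hypothetical.
import sqlite3
from datetime import date

RETENTION_YEARS = 7  # illustrative policy: keep only recent records


def purge_expired(conn: sqlite3.Connection) -> int:
    """Delete records older than the retention cutoff; return rows removed."""
    cutoff = date(date.today().year - RETENTION_YEARS, 1, 1).isoformat()
    cur = conn.execute(
        "DELETE FROM customer_records WHERE created_at < ?", (cutoff,)
    )
    conn.commit()
    return cur.rowcount


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer_records (id INTEGER, created_at TEXT)")
    conn.executemany(
        "INSERT INTO customer_records VALUES (?, ?)",
        [(1, "2005-03-14"), (2, "2012-06-01"), (3, "2019-01-09")],
    )
    print(f"Purged {purge_expired(conn)} old record(s)")
```

Data that has already been purged under a policy like this simply can't show up in a breach, which is the whole point.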

In the consumer world, there is growing sensitivity to how much data is too much to share. A few months ago, I was interviewed for another TV story about FaceApp, this time on WTNH News 8. The question that story raised was whether the app was capturing all of the photos on your device. It wasn't; it only captured the photos you uploaded to it. But the idea that a Russia-based app was getting access to all of your photos alarmed people. For some reason, giving an app the ability to create deep fakes with your image hasn't raised as much concern, but it probably should.

Similarly, when assessing how your business stores and shares data, it’s not enough to simply go by intuition. It takes a systematic review of your environment to make decisions that will protect you and your customers later.


About Jonathan Stone

Jon is a longtime technology industry executive with nearly 30 years of experience.
