When it comes to information systems, privacy is easier said than engineered…
Privacy is a huge concern for organisations that operate in the digital sphere. AI, machine learning, biometrics, and the expansion of connectivity have become unavoidable aspects of business, spurring conversations about risk, surveillance, data ethics, and ownership. The foundations of privacy engineering were laid out in a string of privacy laws dating back to the 1970s, but the privacy debate is now fiercer than ever. Because of this, privacy engineering has gone from recommendation to requirement.
In privacy engineering, privacy considerations span the entire development process, an approach known as ‘Privacy-by-Design’. Prior to GDPR and the maturing of good data practice, engineering and privacy had a difficult relationship. Without legal guidance, technical tools, or an understanding of privacy issues, developers focused on minimising effort and time to market. This often meant reusing inadequate systems and protocols, leading to security flaws and compromised privacy. Rather than treating privacy as an afterthought, privacy engineering makes it a constant priority.
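To make this concrete, here is a minimal sketch of two common Privacy-by-Design practices: data minimisation (keep only the fields a stated purpose requires) and pseudonymisation (replace a direct identifier with a keyed pseudonym). The field names, the secret key, and the helper functions are illustrative assumptions, not part of any particular framework or regulation.

```python
import hmac
import hashlib

# Hypothetical intake step: the required fields and key are assumptions
# chosen for illustration only.
REQUIRED_FIELDS = {"user_id", "country", "signup_year"}  # data minimisation


def pseudonymise(user_id: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256).

    Storing the key separately from the data means the mapping can later
    be destroyed, which is the idea behind GDPR-style pseudonymisation.
    """
    return hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()


def minimise_and_pseudonymise(record: dict, key: bytes) -> dict:
    # Drop everything the stated purpose does not require...
    kept = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    # ...then swap the direct identifier for a pseudonym.
    kept["user_id"] = pseudonymise(kept["user_id"], key)
    return kept


raw = {
    "user_id": "alice@example.com",
    "country": "DE",
    "signup_year": 2021,
    "browsing_history": ["..."],  # not needed, so never retained
}
safe = minimise_and_pseudonymise(raw, key=b"demo-secret")
print(safe)
```

The design choice worth noting is that minimisation happens before anything is stored: fields the purpose doesn't require are never kept, rather than collected now and deleted later.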
The renewed interest in privacy engineering has been catalysed by data regulations and shifting concerns about the processing, provenance, and purpose of information. Businesses that don’t place privacy at the top of the operational agenda will pay the price. Fortunately, privacy debates are taking place across different disciplines and sectors. These cross-disciplinary conversations are fundamental to effective privacy engineering. In order to fully understand privacy issues, data scientists and developers have to work closely with policymakers.
As with all privacy and security initiatives, the requirements surrounding privacy engineering largely depend on local jurisdiction. If information systems are to reliably protect data, cross-disciplinary conversations need to cross borders, too.
For a new AAG topic every week, sign up for our free newsletter.