
Drawing the lines for data, ethics and regulation

Digital cities have improved the quality of daily life and the functionality of city systems, but did the rush to market result in unethical decisions at the expense of people’s data privacy, and if so, how can we mitigate those threats?
While data protection laws set legal requirements for processing personal data, there is a gray area in how those laws are interpreted and applied. Some organizations may interpret the law in ways that benefit their own agenda. This is where ethical boundaries, together with proper privacy and security controls such as supplier due diligence, become a crucial part of the foundational framework.
The European Union General Data Protection Regulation (GDPR) regulates organizations worldwide that collect and use the personal data of people in the European Union (a near-identical regime, the UK GDPR, applies in the United Kingdom). Article 9 of the GDPR prohibits processing special categories of personal data (for example, race or biometrics) unless one of its exceptions applies, such as the data subject's explicit consent.
Recital 71 of the GDPR, which accompanies Article 22's rules on automated decision-making, restricts the use of personal data in profiling. When personal data is used in profiling, the models must be non-discriminatory and accurate, and the data must be secured. The model developer must be ready to explain any adverse automated decision made about a data subject.
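As a concrete illustration, the sketch below shows the kind of pre-deployment fairness check an organization might run on a profiling model before it is used on real data subjects. The column names, the example data and the 0.8 threshold are illustrative assumptions, not requirements taken from the GDPR.

```python
# Minimal sketch of a pre-deployment fairness check for a profiling model.
# The column names ("group", "approved") and the 0.8 threshold are
# illustrative assumptions, not mandated by the GDPR or any standard.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive automated decisions per protected group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return rates.min() / rates.max()

# Example: decisions produced by a hypothetical credit-scoring model.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = selection_rates(decisions, "group", "approved")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")

# A common (but not legally mandated) rule of thumb flags ratios
# below 0.8 for further review before deployment.
if ratio < 0.8:
    print("Potential adverse impact -- investigate before deployment.")
```

The point is not the specific metric but that discrimination testing is documented and repeatable, so the organization can produce evidence if an automated decision is later challenged.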
However, the regulation does not prescribe how models should be built or what data to use to limit discrimination, nor does it go into great detail about how the data is to be secured. It leaves these decisions to the organization, which creates issues of interpretation and applicability. With no governance framework standardizing the process, an organization can stumble into ethical or privacy pitfalls.
Regulating artificial intelligence
Faced with rapidly advancing technology and artificial intelligence (AI), the European Commission proposed the first-ever legal framework on AI. The proposed regulation, known as the Artificial Intelligence Act, was announced in April 2021 and is the first regional attempt to regulate the use of AI.
The regulation is expected to become effective in 2024 or 2025 and will apply to organizations that make AI available in the European Union, use AI in the Union or whose AI outputs affect people in the EU. It will therefore have an extraterritorial reach, like the GDPR. There is no grace period for compliance.
Most of the controls outlined in the regulation will apply to individuals or organizations that develop an AI system, or have one developed, for use by people in the EU, whether free of charge or for payment. Importers, distributors and users will have their own set of requirements to comply with.
The regulation will require full transparency about the product and the data involved; the data must be accurate, limited to what is needed and kept secure; and there must be full accountability. AI systems must be thoroughly tested, documented and audited, and data governance and human oversight must be in place at all times.
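One practical way to operationalize these obligations is to keep a structured governance record for each AI system. The Python sketch below shows one hypothetical shape such a record could take; the field names and example values are assumptions for illustration, since the proposed Act does not prescribe a record format.

```python
# Illustrative sketch of a machine-readable governance record covering the
# obligations listed above (transparency, data minimization, security,
# accountability, testing and human oversight). All field names and values
# are hypothetical; the proposed Act does not prescribe this structure.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str              # transparency about the product
    data_sources: list[str]            # what data is involved
    data_minimization_note: str        # why each source is needed
    security_controls: list[str]       # how the data is secured
    accountable_owner: str             # full accountability
    last_tested: date                  # testing and documentation
    last_audited: date
    human_oversight: str               # who can intervene or override

record = AISystemRecord(
    name="resident-services-chatbot",
    intended_purpose="Route city service requests to the right department",
    data_sources=["service request text", "neighborhood code"],
    data_minimization_note="No names or contact details retained after routing",
    security_controls=["encryption at rest", "role-based access"],
    accountable_owner="City data protection officer",
    last_tested=date(2023, 1, 15),
    last_audited=date(2023, 3, 1),
    human_oversight="Staff review all escalated or low-confidence routings",
)
```

Keeping such records per system makes audits and regulator inquiries a matter of retrieval rather than reconstruction.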
The European Commission urges companies to begin auditing their AI systems now, ahead of the regulation, so that gaps in existing systems can be fixed. DLA Piper, for instance, offers an AI Scorebox that organizational stakeholders can fill out to help determine whether there are gaps in current systems.
Just as the GDPR paved the way for other countries to enforce robust privacy rules and regulations, the European Union's proposed Artificial Intelligence Act will set the bar high for AI compliance and pave the way for similar regulations elsewhere.
For this reason, all stakeholders must prepare compliance measures as AI law develops and continue to address data protection issues throughout AI-assisted projects. Organizations that want to deploy AI need to do so responsibly and ethically. Adopting a model governance framework is a good way to begin that journey.