The growing
integration of artificial intelligence across sectors such as finance,
healthcare and customer service has raised global concerns about data privacy
and security. AI-driven organizations routinely engage in complex data
processing activities that can significantly impact the rights and freedoms of
data subjects, particularly in jurisdictions with emerging data protection
frameworks. While international data protection instruments emphasize the
importance of Data Protection Impact Assessments (DPIAs) for high-risk
processing, many countries, including Nigeria, India and Brazil, struggle to
define clearly what constitutes ‘high-risk’ processing or to provide actionable
guidance for AI applications. This paper examines that regulatory gap, comparing the position in these jurisdictions with the United Kingdom (UK) and Hong Kong, where regulators have issued clear guidelines. It argues that the absence of
explicit classifications and mandatory DPIA requirements for AI-related
processing hinders compliance and weakens the protection of data subject
rights. The paper recommends that jurisdictions with such gaps issue regulatory
guidance that explicitly designates AI-related processing as high-risk and
mandates DPIAs for it.