
The Philippines’ AI Rulebook (So Far): Laws, New Guidelines, Sector Rules, and What’s Next (Updated Sept 16, 2025)

  • Writer: Eric Dela Cruz
  • Oct 1
  • 7 min read


Artificial Intelligence (AI) is no longer a futuristic concept — it is embedded in our everyday lives. Students use AI to enhance research, professionals leverage it to streamline workflows, and creatives employ it to bring bold ideas to life. But as AI becomes more powerful, it also raises pressing questions about privacy, accountability, and ethical use. This article aims to provide organizations, innovators, and compliance teams with a clear baseline to build responsible and lawful AI systems.





Bottom line: There’s no single, comprehensive AI law in force in the Philippines yet. But there are enforceable rules that apply to AI development and use today, led by the Data Privacy Act (DPA) and the National Privacy Commission’s (NPC) 2024 Advisory on AI, plus e-commerce rules, financial-sector model risk proposals, crypto-asset regulations, policy direction from the Supreme Court, and national-security guidance. Congress also has multiple AI bills pending.


Data Privacy

The core privacy law that applies to AI right now is the Data Privacy Act of 2012 (RA 10173) and its Implementing Rules and Regulations. The law is technology-neutral and applies to any processing of personal data, including data used to train, test, and deploy AI systems. NPC Advisory No. 2024-04 reminds personal information controllers to ensure accuracy, fairness, transparency, and purpose limitation when processing data through AI workflows. Organizations must disclose the purpose of processing, explain the extent to which AI affects personal data, and make that information easily accessible to data subjects.

NPC Advisory Opinion No. 2024-002 (January 2024) further clarifies that the use of AI is permissible under the Data Privacy Act of 2012 (RA 10173), provided that Personal Information Controllers (PICs) continue to comply with the Act’s general privacy principles. PICs using AI must:

  • Ensure lawful basis for processing personal data, regardless of whether AI technology is used.

  • Uphold transparency and data subject rights (right to be informed, right to rectification) by providing adequate information and accessible mechanisms. 

  • Conduct a Privacy Impact Assessment (PIA) to evaluate whether AI use is fair, proportional, and necessary considering potential risks to data subjects.

  • Implement reasonable safeguards to protect personal data when using AI tools such as ChatGPT or other generative models.


The NPC emphasized that there is “no manifest conflict” between AI use and the DPA, but accountability for AI-driven processing remains with the organization.

NPC Advisory No. 2024-03 (Guidelines on Child-Oriented Transparency) goes further by requiring additional safeguards when AI systems process or are likely to process children’s personal data. Key obligations include:

  • Conducting Child Privacy Impact Assessments (CPIAs) before launching products or services likely to be accessed by children, and updating these assessments as products evolve.

  • Implementing high-privacy settings by default, such as disabling geolocation and setting profiles to private.

  • Providing child-friendly privacy notices that use plain language, are layered for clarity, and can be delivered in alternative formats (videos, infographics, audio) for comprehension. 

  • Engaging parents or guardians when risks are high and verifying their involvement through appropriate methods.

  • Avoiding deceptive design patterns that could nudge children into sharing more data than necessary.


These measures should be applied to any AI system that targets or is likely to be used by minors.


E-Commerce & Consumer Protection: Duties for AI-Enabled Platforms

With the rise of digital commerce, AI-enabled platforms face heightened obligations under the Internet Transactions Act of 2023 (RA 11967) and its IRR. These rules cover business-to-business and business-to-consumer transactions, requiring platforms and merchants to be transparent, register when required, and implement takedown procedures for scams and unsafe products. AI-powered chatbots, recommendation engines, and ad-tech systems must be fair, accurate, and explainable to consumers.


Cross-Cutting Laws & Cybercrime Risks

AI tools may inadvertently create exposure under other laws. The Cybercrime Prevention Act (RA 10175) penalizes online fraud, identity theft, and hacking — risks amplified by generative AI misuse. The Electronic Commerce Act (RA 8792) protects the integrity of e-documents and signatures, which AI systems must not forge or falsify. The Consumer Act (RA 7394) prohibits deceptive or unfair trade practices, including misleading AI-generated ads or fake reviews. Likewise, the Intellectual Property Code (RA 8293) may be triggered when AI systems generate works that infringe on existing IP rights.


Digital Assets & Crypto Rules

The Philippines has seen rapid growth in crypto-assets, with AI playing a dual role as both risk vector and security solution. The SEC Memorandum Circular Nos. 4 & 5 (2025) set out the Crypto-Asset Service Provider (CASP) Rules and Guidelines, requiring registration, capital adequacy, and risk monitoring. CASPs are encouraged to deploy AI tools for fraud detection, anti-money laundering monitoring, and market integrity surveillance.


Courts & Legal Profession: AI in Justice

The judiciary is also embracing AI. The Supreme Court’s Strategic Plan for Judicial Innovations (SPJI) 2022–2027 includes AI research to improve court efficiency. In November 2024, Senior Associate Justice Marvic M.V.F. Leonen shared that the Supreme Court was drafting an AI Governance Framework for the Judiciary to ensure ethical and responsible use. In May 2025, Chief Justice Alexander G. Gesmundo reported positive results from the pilot implementation of Scriptix, an AI-powered voice-to-text transcription tool, emphasizing that AI will support — not replace — human court stenographers.


However, courts have warned against misuse. The Sandiganbayan admonished a lawyer for submitting AI-generated pleadings with fictitious citations, citing the Code of Professional Responsibility and Accountability (CPRA) which forbids misleading courts.


National AI Strategy & Policy Signals

In July 2024, the Department of Trade and Industry (DTI) launched the National AI Strategy Roadmap 2.0, aiming to position the Philippines as a leading AI research hub. The plan promotes industry adoption, workforce upskilling, and responsible use guided by principles of transparency, fairness, and accountability. In parallel, Defense Secretary Teodoro issued a memorandum cautioning AFP and DND personnel against AI-generated photos and apps that pose identity theft and phishing risks.


Proposed AI-Specific Legislation

Lawmakers have filed several measures to create a comprehensive AI regulatory framework. These include:

  • House Bill No. 7396 (AIDA): Establishes an Artificial Intelligence Development Authority to issue ethical guidelines and oversee AI development.

  • House Bill No. 3195 and Senate Bill No. 852: Create the Philippine Council on Artificial Intelligence and a National Center for AI Research, and propose an AI Bill of Rights to protect citizens and workers.

  • Senate Bill No. 25: Requires a national registry for AI systems, meaning tools must be cleared before deployment.

  • House Bill No. 3214 (Deepfake Regulation Act): Penalizes non-consensual deepfake creation with imprisonment (2–5 years) and fines (₱50,000–₱200,000), and grants victims a right to damages.


If enacted, these measures would establish clear guardrails for safe, ethical, and accountable AI.


Practical Compliance Checklist 

The checklist below translates key legal requirements and policy guidance into practical actions for AI development teams and compliance officers: 

  • Data Mapping: Inventory personal data, identify lawful basis under RA 10173, and document data flows using privacy-by-design principles. 

  • AI Compliance: Align AI deployments with NPC Advisory Opinion No. 2024-002 by confirming lawful basis, conducting a PIA, and enabling data subject rights before production rollout.

  • Transparency: Disclose AI use to data subjects.

  • Child Data Safeguards: For AI systems likely to be accessed by minors, conduct a Child PIA (CPIA), use high-privacy defaults, provide child-friendly layered notices, and involve parents or guardians as required by NPC Advisory 2024-03.

  • Model Governance: Test for bias, ensure human-in-the-loop controls for high-impact decisions, and align with BSP’s draft Model Risk Management guidelines.

  • E-Commerce Compliance: Register platforms, implement takedown processes, and follow RA 11967 IRR codes of conduct.

  • Cybersecurity: Monitor for cybercrime risks, maintain incident response workflows, and comply with breach notification requirements.

  • Vendor & Security Controls: Conduct DPIAs, enforce access controls, and include data protection clauses in contracts.

  • Litigation Readiness: Verify all citations and avoid unvetted AI-generated pleadings, as reminded by the Sandiganbayan.


Things to Watch

The Philippine regulatory landscape for AI is evolving rapidly, and several key developments are on the horizon:

  • BSP’s upcoming Model Risk Management Circulars and potential AI ethics guidance.

  • The NPC’s Advisory Opinion No. 2024-002 signals the Commission’s proactive stance on AI. Expect more sector-specific guidance and possible updates to PIA templates to address algorithmic decision-making and generative AI use cases.

  • Supreme Court’s forthcoming AI Governance Framework for judiciary operations.

  • The NPC is also moving toward stronger protection for vulnerable groups. Its 2024-03 Guidelines on Child-Oriented Transparency set a precedent for child-specific AI compliance and may shape future regulations for “AI for kids” solutions.

  • Congressional progress of HB 7396, HB 3195, SB 25, SB 852, and HB 3214, which could introduce new compliance obligations, oversight bodies, and standards.


Frequently Asked Questions (FAQs)

  1. Is there a single AI law today? No. AI is currently regulated through the DPA, NPC advisories, e-commerce, crypto, and sectoral rules. 

  2. Do these rules apply to public data? Yes, if personal data is involved, DPA obligations still apply. 

  3. What if AI does not process personal data? Other laws (Consumer Act, Cybercrime Act, IP Code, RA 11967) may still apply. 

  4. What should banks do? Prepare for BSP’s MRM guidelines, maintain model inventories, and perform independent validations. 


Practical Insights & Takeaways

Organizations should not wait for the Philippine “AI Act” to be enacted before acting. By building governance programs now, they can reduce risk and build user trust. Key steps include:

  • Maintaining a living AI model inventory

  • Conducting regular validation and bias testing

  • Implementing privacy-by-design across all projects

  • Monitoring legislative and regulatory developments 


Compliance should be seen not as a cost, but as a strategic investment that positions the organization as a leader in responsible and ethical AI.



©2020 by Belgica Aranas Baldueza Ng Dela Cruz & Associates Law Offices.
