Blog Post: Building the Bridge to the Autonomous Sky: How the FAA Can Unlock Drones for the Public Good

Author: Demetrius Hernandez

Tech Ethics Lab Blog, a collaboration between IBM and the University of Notre Dame.

The Federal Aviation Administration (FAA) recently released its long-awaited proposed rule to standardize how drones can fly beyond a pilot’s line of sight, opening a critical window for public comment. The proposal is a foundational roadmap for a new era of aviation, one that stands to shape everything from automated logistics to how first responders use drones in an emergency. The U.S. decision comes at a time when governments worldwide are crafting norms for AI-driven drones, from Europe’s U-Space Framework to Canada’s Remotely Piloted Aircraft Systems (RPAS) reforms. Through our ongoing project with the ND-IBM Tech Ethics Lab, we study how regulatory choices like these influence the responsible deployment of AI-enabled drones in high-stakes public service contexts. Together with the community of innovators and public-interest technologists, we now have a remarkable opportunity to help build the best bridge to a future that unlocks the full spectrum of drone applications serving the common good.

The Proposal at a Glance

The proposed rule envisions a future built around highly automated operations, with two main authorization paths: permits for lower-risk activities such as agriculture, aerial surveying, or civic missions, and certificates for higher-risk or larger-scale operations. Instead of a single pilot in charge, operators must now assign two roles: an operations supervisor to ensure overall compliance and safety, and a flight coordinator to monitor specific missions. Risk is also scaled through five population density categories, which determine how much oversight is required as flights move from rural areas into more crowded environments. Together, these measures aim to provide a flexible but structured framework: lighter oversight for routine, lower-risk missions, and more rigorous safeguards where operations grow in scale or complexity.

The Missing Piece: Human-Machine Teaming

The proposal's biggest gap is its assumption of near-total autonomy. By replacing the traditional “Remote Pilot in Command” with more supervisory roles, the regulation overlooks how most public-interest missions actually work. Imagine a drone supporting firefighters during a wildfire. The mission is a dynamic collaboration: the drone flies autonomously along a pre-set grid, while a human operator in the loop makes critical adjustments in real time based on external factors and instructions from the crew on the ground. This is human-machine teaming in action. Effective governance isn’t about removing the human; it’s about defining the human’s role within a complex system, something the current proposal doesn't capture.

A Human Path to an Autonomous Future

This is precisely where our work at the ND-IBM Tech Ethics Lab offers a blueprint. In a recent collaboration with the MIT Science Policy Review, we showed that governance cannot treat all autonomous missions the same. The work demonstrates how operations shift between phases of routine automation and periods where human judgment is indispensable. This insight reframes the policy challenge: the key is designing frameworks that flexibly allocate responsibility across humans and machines (a principle that applies well beyond drones to healthcare, transportation, climate monitoring, and more). Applied here, it offers a way to unlock drones for public benefit while accelerating progress toward global goals: promoting sustainable agriculture (SDG 2), building resilient infrastructure (SDG 9), and monitoring ecosystems and supporting climate action (SDG 13). For researchers, civic actors, and policymakers, this isn’t just about aviation law; it sets the groundwork for how autonomous systems are ethically and safely deployed across society.

Our Call to Action: Let’s Build the Bridge

The future enabled by this rule is one of incredible promise, where drones help grow our food more sustainably, keep our infrastructure resilient, and protect our communities. Because the framework established now will shape the innovation landscape for years to come, this rulemaking period is a critical moment to get the details right. This is why our research lab submitted a formal comment to the FAA. We will also host a series of workshops in the coming months on ethical drone deployment, creating a space for practitioners, regulators, and civic groups to shape best practices together. Our goal is to engage with the research community to create a pathway that unleashes large-scale commercial innovation while also empowering the local, civic, and scientific applications of AI-enabled drones that deliver immediate and lasting public value.

Learn More

Read the team’s comment on the BVLOS NPRM for additional insights, and stay tuned for a forthcoming interview and policy brief in collaboration with MIT. You can also watch this short overview of Demetrius's work, recently presented at the RISE AI Conference at the University of Notre Dame, where it received the best poster presentation award.



Demetrius Hernandez is a second-year Ph.D. candidate in computer science and engineering at the University of Notre Dame, where his research focuses on autonomous drones for emergency response. He is a core member of the ND-IBM Tech Ethics Lab–funded project Enhancing Human-AI Collaboration and Policy in Emergency Response: Ethical Deployment of AI-Enabled Drones, alongside Jane Cleland-Huang (University of Notre Dame), Ricardo Morales (Brown University), Kaitlin Harris (U.S. Air Force/SAF/AQRE), and Tristian Hernandez (U.S. Air Force). Before beginning his doctoral studies, Demetrius worked as a computer scientist on the Counter Drone Team at White Sands Missile Range in New Mexico.