Last Updated: March 09, 2026, 13:52 IST
Designed to function independently once activated, these drones perform three key battlefield tasks on their own: locating targets, identifying them, and launching an attack

Israel's Harop drone can loiter for hours before striking enemy radar installations. (AI Image)
What once belonged to the imagination of science fiction films is now rapidly becoming a reality on modern battlefields. So-called “killer robots”, formally known as Lethal Autonomous Weapon Systems (LAWS), are emerging as a transformative, and controversial, development in warfare. These weapons are capable of identifying, selecting and attacking targets without direct human control, triggering an intense international debate over both their technological implications and ethical consequences.
At the centre of the controversy is the idea that machines may soon be making life-and-death decisions. More than 30 countries have called for a complete ban on such systems, arguing that the risks posed by autonomous weapons go far beyond technological concerns and raise profound moral questions.
Autonomous combat drones represent one of the most prominent forms of these systems. Designed to function independently once activated, they perform three key battlefield tasks on their own: locating targets, identifying them, and launching an attack. While humans may initiate their deployment, subsequent decisions are made by onboard algorithms powered by artificial intelligence.
These weapons are generally classified into three categories based on the level of human control involved. In the first category, the system requires explicit human approval before it can engage a target. In the second, the machine can make the decision to attack, though a human operator retains the ability to intervene and halt the strike. The third category, considered the most controversial, operates entirely without human oversight. These fully autonomous systems represent the true concept of LAWS.
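The three categories above can be pictured as a simple engagement gate. The following is a purely illustrative toy sketch; the mode names and the `may_engage` function are assumptions made for this example, not the interface of any real weapon system:

```python
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = 1    # human must approve each engagement
    HUMAN_ON_THE_LOOP = 2    # machine decides; a human can still veto
    HUMAN_OUT_OF_THE_LOOP = 3  # fully autonomous: the contested LAWS case

def may_engage(mode: ControlMode, human_approved: bool, human_vetoed: bool) -> bool:
    """Return True if engagement is permitted under the given control mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_approved          # nothing happens without explicit approval
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed        # proceeds unless a human intervenes
    return True                        # no human oversight at all
```

The sketch makes the controversy concrete: in the third mode the function returns `True` unconditionally, which is exactly the absence of human judgement that critics object to.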
The effectiveness of such weapons lies in their advanced artificial intelligence and sensor technologies. Autonomous drones are equipped with multiple sensors that continuously monitor their surroundings. LiDAR systems use laser pulses to generate detailed 3D maps of terrain, while thermal cameras can detect human body heat even in darkness. Radar systems allow the drone to detect movement and activity over long distances. By combining data from these sensors, the drone builds a comprehensive real-time understanding of its environment.
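One common way to combine readings from several sensors is a weighted fusion of per-sensor confidence scores. The function and weights below are assumptions invented for illustration, a minimal sketch of the idea rather than how any fielded system works:

```python
def fuse_detections(lidar_conf: float, thermal_conf: float, radar_conf: float,
                    weights: tuple = (0.4, 0.35, 0.25)) -> float:
    """Combine per-sensor detection confidences (each in [0, 1]) into one
    fused confidence using a weighted average. Weights sum to 1, so the
    result also stays in [0, 1]."""
    scores = (lidar_conf, thermal_conf, radar_conf)
    return sum(w * s for w, s in zip(weights, scores))
```

For example, a strong thermal signature with weak LiDAR and radar returns yields a middling fused score, so no single sensor dominates the picture.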
This information is then processed by AI-driven deep learning systems trained on vast datasets containing millions of images and scenarios. Within milliseconds, the system analyses visual data to determine whether a detected object is a soldier, a military vehicle or a civilian presence.
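The final step of such a classifier is typically a softmax over class scores plus a confidence threshold. The labels and threshold below are assumptions chosen for this toy sketch; real systems are vastly more complex, but the shape of the decision is the same:

```python
import math

LABELS = ["soldier", "military_vehicle", "civilian"]

def softmax(logits: list) -> list:
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits: list, threshold: float = 0.9) -> str:
    """Return the most probable label, or 'uncertain' if the model is
    not confident enough to commit to any label."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "uncertain"                   # defer rather than guess
    return LABELS[best]
```

The threshold matters: a system that refuses to decide below 90% confidence behaves very differently from one that always commits, which is one reason misidentification risk dominates the safety debate.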
The most contentious stage follows: the decision to attack. This step, driven by what is known as a threat-assessment algorithm, evaluates factors such as the perceived danger posed by the target, the military value of the objective and the presence of nearby individuals. Critics argue that, unlike human decision-making, such calculations do not incorporate ethical judgement.
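At its core, such a calculation reduces to a weighted score compared against a cut-off. The weights, inputs and function name below are assumptions for this sketch only; the point of the example is the critics' objection made visible, since ethics appears nowhere among the terms:

```python
def threat_score(perceived_danger: float, military_value: float,
                 civilians_nearby: float,
                 weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Toy weighted score, each input in [0, 1]. Civilian presence is the
    only mitigating term, and only by whatever weight it is assigned."""
    w_danger, w_value, w_civ = weights
    return (w_danger * perceived_danger
            + w_value * military_value
            - w_civ * civilians_nearby)

ENGAGE_THRESHOLD = 0.6   # arbitrary cut-off for this illustration

def recommend_engagement(danger: float, value: float, civilians: float) -> bool:
    return threat_score(danger, value, civilians) >= ENGAGE_THRESHOLD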
Autonomous drones also rely on sophisticated navigation technologies. Using GPS and systems such as Simultaneous Localisation and Mapping (SLAM), they can map unfamiliar environments and chart their own paths. Even if GPS signals are jammed during electronic warfare, these drones can continue navigating using onboard sensors.
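The GPS-denied fallback described above amounts to switching from an absolute position fix to dead reckoning from onboard motion estimates. The `Navigator` class below is a hypothetical, heavily simplified sketch of that switch, not real avionics code:

```python
class Navigator:
    """Toy position tracker: use a GPS fix when available, otherwise
    dead-reckon from the last known position and estimated velocity."""

    def __init__(self, start: tuple = (0.0, 0.0)):
        self.position = start

    def update(self, gps_fix, velocity: tuple, dt: float) -> tuple:
        if gps_fix is not None:
            self.position = gps_fix          # trust the absolute fix
        else:
            # GPS jammed: integrate velocity over the elapsed time
            x, y = self.position
            vx, vy = velocity
            self.position = (x + vx * dt, y + vy * dt)
        return self.position
```

Dead reckoning drifts over time, which is why real systems fuse it with SLAM and other sensors rather than relying on it alone.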
In some cases, autonomous drones are deployed collectively through swarm technology. Multiple drones operate as a network, sharing data instantly with one another. Information detected by one drone can be transmitted to the entire swarm, allowing coordinated actions across dozens or even hundreds of machines.
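The data-sharing pattern described here resembles a shared blackboard: any member's detection becomes visible to every other member. The `Swarm` class is an invented minimal sketch of that pattern, nothing more:

```python
class Swarm:
    """Toy swarm: members publish contacts to a shared list that
    every member can read, so one detection informs all of them."""

    def __init__(self, size: int):
        self.members = [f"drone-{i}" for i in range(size)]
        self.shared_contacts = []            # network-wide blackboard

    def broadcast(self, sender: str, contact: str) -> None:
        """One member's detection becomes visible to the whole swarm."""
        self.shared_contacts.append((sender, contact))

    def known_contacts(self) -> list:
        """What every member of the swarm can see."""
        return [contact for _, contact in self.shared_contacts]
```

Real swarms use decentralised mesh protocols rather than a single shared list, but the coordination benefit, and the defensive challenge of facing hundreds of informed machines at once, follows from this basic idea.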
Many of these systems employ loitering munitions, drones that can hover over an area for extended periods before diving into their target and detonating. Others are designed to launch missiles or bombs, while some are capable of electronic warfare operations such as jamming enemy communication or radar systems.
Several countries have already developed or deployed such systems. Israel’s Harop drone, for instance, can loiter for hours before striking enemy radar installations. Russia’s Kalashnikov ZALA KYB is another autonomous attack drone designed for precision strikes. The United States and Australia have jointly developed the Boeing MQ-28 Ghost Bat, a combat drone intended to operate alongside manned fighter aircraft while making autonomous tactical decisions. Turkey’s Kargu-2 drone has also drawn international scrutiny.
According to a United Nations report, a significant milestone, and warning, occurred in 2021 when an autonomous drone reportedly attacked individuals in Libya without direct human command, marking what may have been the first instance of such a weapon acting independently in combat.
Despite their military advantages, these systems raise serious concerns. Artificial intelligence is not immune to error. A misidentified target could result in civilians being mistaken for combatants, leading to devastating consequences. Cybersecurity also poses a major risk. If hostile actors were able to hack these systems, autonomous weapons could potentially be turned against their own operators.
Swarm attacks represent another challenge. Defending against hundreds of coordinated drones simultaneously could overwhelm conventional air-defence systems.
Beyond the technological risks lie deeper ethical questions. Should a machine be allowed to decide who lives and who dies? If an autonomous weapon makes a fatal mistake, who bears responsibility, the programmer who wrote the code, the military commander who deployed it, or the company that manufactured the system?
Such questions have fuelled growing international pressure for regulation. More than 30 countries, including Austria and New Zealand, have called at the United Nations for a complete ban on lethal autonomous weapons.
However, major military powers including the United States, Russia, China and Israel have opposed such restrictions, arguing that autonomous systems could provide a crucial strategic advantage in future conflicts.
First Published: March 09, 2026, 13:52 IST