To Fear or Not to Fear: India as a Nuclear Power

HISTORY

     In 1939, on a visit to India, Homi J. Bhabha realized that India lacked the facilities to run nuclear research and decided to build an institute for it along with his good friend J. R. D. Tata. With a total of 1,240 US dollars in funds from the Sir Dorabji Tata Trust, the Government of Bombay and the Council of Scientific and Industrial Research, Bhabha commenced work at the Tata Institute of Fundamental Research (TIFR) on June 1st, 1945. Bhabha's initial plan for nuclear power was not to build weapons for war. He said, "When nuclear energy has been successfully applied for power production in, say, a couple of decades from now, India will not have to look abroad for its experts but will find them ready at hand."

     Nuclear research at TIFR started in the early 1950s with the one-million-volt Cockcroft-Walton accelerator. Initial experiments at TIFR, led first by Bhabha and then by Bernard Peters, were devoted to cosmic-ray research and balloon-based experiments. Around the same time, Bhabha started the Atomic Energy Commission (AEC), a policy-making group that also initiated surveys of India's natural resources, especially uranium- and thorium-bearing minerals. In 1955 the AEC entered a joint venture with the United Kingdom to build a swimming-pool reactor called Apsara, which reached criticality on August 4, 1956. Apsara was followed by a series of developments. India collaborated with Canada to build the Canadian-Indian Reactor (CIR), inspired by the NRX reactor at Chalk River, which reached criticality in July 1960. The United States later joined the project to modify the CIR, which became known as the CIRUS reactor. The Trombay center built a plant to process crude thorium hydroxide, obtaining uranium fluoride as a by-product. TIFR was recognized as the National Centre of the Government of India for Advanced Study and Fundamental Research in Nuclear Science and Mathematics in 1955. Since then TIFR has expanded its research to include fields like theoretical and nuclear physics, condensed matter physics, computer science, molecular biology and radio astronomy.

INDIAN NUCLEAR WEAPONS PROGRAMME

     The nuclear movement started by Homi J. Bhabha later inspired the use of nuclear power to build weapons for the protection of the nation. This was predicted in June 1946 by Jawaharlal Nehru: "As long as the world is constituted as it is, every country will have to devise and use the latest devices for its protection. I have no doubt India will develop her scientific researches and I hope Indian scientists will use the atomic force for constructive purposes. But if India is threatened, she will inevitably try to defend herself by all means at her disposal." The Indian nuclear programme was planned as a three-stage effort. In stage 1, natural-uranium-fuelled pressurised heavy water reactors (PHWRs) produce electricity while generating plutonium-239 as a by-product. In stage 2, fast breeder reactors (FBRs) use a mixed oxide (MOX) fuel made from natural uranium and the plutonium-239 recovered by reprocessing spent fuel from the first stage. Stage 3 consists of an advanced nuclear power system built around a self-sustaining series of thorium-232/uranium-233 fuelled reactors.

     After India's loss to China in a brief Himalayan border war in October 1962, the New Delhi government decided to use its nuclear programme to develop weapons as a way to deter Chinese aggression. The first nuclear test, known to the world as Smiling Buddha, took place in May 1974 at Pokhran in the Thar Desert. This 8-kiloton weapon created a crater 47 meters in radius and 10 meters deep. In 1998, the Bhabha Atomic Research Centre (BARC) and the Defence Research and Development Organisation (DRDO) together conducted the second round of nuclear weapons tests, called Pokhran-II. A total of five devices were tested, three on 11th May and two on 13th May. The three devices tested on 11th May were a 45 kT thermonuclear warhead using nuclear fusion, a pure fission device of 12 kT yield designed to be dropped from an aircraft, and an experimental fission device of 0.3 kT yield. The two devices tested on 13th May were experimental fission devices of 0.5 kT and 0.3 kT yield. Since then India has continued to develop its arsenal and is estimated to have built over 120 warheads.

NUCLEAR ARSENAL

Estimated to have enough weapons-grade material for roughly 150-200 warheads in total (consistent with the plutonium figures discussed below), India currently fields a nuclear triad: air-launched nuclear weapons, land-based ballistic missiles and sea-based ballistic missiles. All of India's weapons are plutonium-based, made from weapons-grade plutonium produced in the CIRUS reactor or the Dhruva heavy-water reactor. India currently operates around 25 nuclear reactors. It has four main delivery systems for its nuclear weapons: ballistic missiles, submarines, cruise missiles and strategic bombers. Most of India's ballistic missiles were developed as part of its ambitious Integrated Guided Missile Development Programme (IGMDP), managed by the DRDO. The Indian armed forces deploy three kinds of nuclear-capable ballistic missiles under the control of the Strategic Forces Command (SFC): short-, medium- and intermediate-range ballistic missiles. The short-range missiles, listed in the table below, have ranges under 1,000 km and include the Prithvi series, the Prahaar, the Dhanush and the Shaurya.

 

Missile | Description | Range
Prithvi-I | Originally built for small warheads; uncertain whether it is nuclear-capable or conventional. | 150 km
Prahaar | Currently under development; carries a nuclear or conventional payload. | 150 km
Prithvi-II | Failed most of its original tests, but since 2016 the weapon has been deemed successful. | 250-350 km
Prithvi-III | Once development is complete, it will be able to carry a single nuclear or conventional warhead. | 350+ km
Dhanush | Liquid-fuelled and ship-launched; first successfully tested on Oct. 5, 2012. | 350+ km
Shaurya | Hypersonic land-based variant of the nuclear-capable K-15 submarine-launched ballistic missile; can carry a single conventional or nuclear warhead. | 700+ km

 

The medium-range ballistic missiles have ranges between 1,000 and 3,000 km and usually carry a single nuclear or conventional warhead. The first missiles of the Agni series fall in this class.

 

Missile | Description | Range
Agni-I | The range can be extended by reducing the payload. | 700-1200 km
Agni-II | Unclear operational status; last tested in April 2013. | 2000+ km

 

The Agni series also includes two intermediate-range ballistic missiles, with ranges between 3,000 and 5,500 km, and two intercontinental ballistic missiles with ranges greater than 5,500 km.

 

Missile | Description | Range
Agni-III | Introduced into military service in 2012; fewer than 10 launchers. | 3200 km
Agni-IV | Road- and rail-mobile missile; most recent successful test in January 2017. | 4000 km
Agni-V | Under development; believed not to have the capability to carry multiple independently targetable reentry vehicle (MIRV) warheads. | 5200 km
Agni-VI | Nuclear-capable ICBM under development; may be armed with MIRVs. | 10,000 km

 

The Indian Navy has two sea-based delivery systems for nuclear weapons: a submarine-launched system and a ship-launched system. India's first missile submarine, the INS Arihant, became fully operational in 2016, and in November 2017 India launched its second Arihant-class submarine, the Arighat. The DRDO is also developing India's submarine-launched ballistic missile (SLBM) capability through its high-priority K-series missile project.

 

Missile | Description | Range
K-15 (Sagarika) | Under development; no MIRV capability. | 700 km
K-4 | Under development; carries conventional or nuclear payloads. | 3500 km
K-5 | Under development; capacity to carry 4 MIRVs. | >6000 km

 

India also has three cruise missiles: the BrahMos, the BrahMos-II and the Nirbhay. The BrahMos is a nuclear-capable land-attack cruise missile jointly developed by Russia and India. With a flight range of 300 to 500 km, it is one of the very few missiles capable of being launched from land-based, ship-based, submarine-based, and now air-launched platforms. The BrahMos-II is a hypersonic version of the BrahMos that is currently under development. After India's induction into the Missile Technology Control Regime (MTCR) in June 2016, the missile's range was increased from 290 km to 600 km. The Nirbhay is a nuclear-capable land-attack cruise missile under development with an estimated range of 800-1,000 km.

Apart from the missiles mentioned above, India has several nuclear-capable strategic bombers. The French-built Mirage 2000H can be used to deliver gravity nuclear bombs, and the Jaguar IS fighter-bombers have been modified to deliver nuclear payloads. In June 2016 the Russian-built Sukhoi-30 MKI completed its first flight equipped with the nuclear-capable BrahMos, and an additional 40 such aircraft are expected to be modified to carry the missile. Lastly, India plans to upgrade its aging air force with newer aircraft like the French Rafale fighter jet, which could eventually take over the air-based nuclear strike role.

SHOULD YOU FEAR THE INDIAN NUCLEAR PROGRAM?

Based on the World Nuclear Industry Status Report of 2017, India ranks third in the world in the number of nuclear reactors under construction. The Indian nuclear doctrine is largely based on an unofficial document released in 1999 by the National Security Advisory Board, which outlines the deployment of the nuclear triad: aircraft, mobile land-based missiles and sea-based assets, designed for "punitive retaliation." According to Indian officials, the size of the nuclear stockpile is simply for maintaining a "credible minimum deterrence," with abilities that enable an "adequate retaliatory capability should deterrence fail." The draft nuclear doctrine described the first use of nuclear weapons as "constituting a threat to peace, stability and sovereignty of states." In January 2003 India formalized its loosely defined nuclear doctrine, adding a "No First Use" policy under which it would not use nuclear weapons except in retaliation against a nuclear attack; additionally, the Government of India reserved the right to use nuclear weapons in response to biological or chemical weapon attacks. In 2010, the then national security advisor Shivshankar Menon described India's nuclear doctrine as "no first use against non-nuclear weapon states," which implied that first use by India of a nuclear weapon was possible against another nuclear-armed competitor.

Currently India operates seven nuclear-capable systems: two aircraft, four land-based ballistic missiles, and one sea-based ballistic missile, with four additional systems in development. The long-range land- and sea-based missiles will likely be deployed within the next decade. Beyond its existing warheads, India is estimated to have produced 600 kilograms of weapons-grade plutonium, enough to build 150-200 nuclear warheads. Most of the plutonium comes from the 40 MWt CIRUS and the 100 MWt Dhruva reactors, which began operating in 1963 and 1988, respectively. Based on capacity factor and operating availability, the CIRUS reactor was estimated to produce 4 to 7 kg of weapons-grade plutonium annually; the corresponding figure for the Dhruva reactor is 11 to 18 kg. Despite all the recent expansions of India's nuclear stockpile, its retaliation time is slow: India stores its nuclear warheads in a disassembled state, keeping the fissile core separate from the warhead package, which greatly increases the time required to deploy the weapons.
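As a rough consistency check, assuming the commonly cited figure of roughly 3 to 4 kg of weapons-grade plutonium per warhead (an assumption, not a figure stated in this article): 600 kg / 4 kg per warhead ≈ 150 warheads, and 600 kg / 3 kg per warhead = 200 warheads, which brackets the 150-200 estimate above.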

India has nuclear cooperation agreements with many countries, including the U.S., the U.K., Russia, France, Namibia, South Korea, Mongolia, Canada, Argentina, Kazakhstan, and Japan, and has been party to a non-attack agreement with Pakistan since January 1991. India is also emphasizing a future strategic relationship with China. Despite all these agreements, India has refused to join the two most important nuclear treaties: the Comprehensive Nuclear-Test-Ban Treaty and the Nuclear Non-Proliferation Treaty. India actively participates in international nuclear trade. In 2013, India signed a bilateral safeguards agreement with Canada for trade in nuclear materials and technology used in IAEA-safeguarded facilities, which eventually led Canada to agree to a five-year deal to supply India with uranium to fuel civilian nuclear reactors. In 2014 Australian Prime Minister Tony Abbott and Indian PM Narendra Modi signed a nuclear cooperation agreement allowing Australia to export uranium for India's civil nuclear program. In January 2015, India and the United States released a joint statement announcing that the two nations would work towards India's phased entry into the Nuclear Suppliers Group (NSG), the Missile Technology Control Regime (MTCR), the Wassenaar Arrangement, and the Australia Group. Following these joint discussions, India has focused more on nuclear energy as a sustainable power source. Twenty-seven years of experiments have finally resulted in an Indian fast breeder reactor. Fast breeder reactors differ from conventional nuclear plants in that the neutrons sustaining the chain reaction travel at higher velocities; such a reactor can generate more fuel than it consumes, by breeding new fissile material from fertile uranium. India hopes to use the excess fuel for commercial purposes.

Unit 731: Imperial Japan’s Biological and Chemical Warfare

Written by Romeo Jung.


Introduction

Unit 731 was a secret biological and chemical warfare unit that Imperial Japan established during World War II. Eager to win the war, the scientists involved committed inhumane crimes, such as vivisection, against Chinese, Korean, Russian, and Mongolian prisoners of war, and used the data gained to harm many Chinese civilians. This essay details the unit's biological research and its data from start to end, as well as their impact and aftermath.

Background

Unit 731 was first established in 1932 as a small group of five scientists interested in biological weapons, and was expanded around 1936 when Shiro Ishii was given full command of the unit. It operated under cover names like "lumber yard" and "Epidemic Prevention and Water Purification Department of the Kwantung Army"; the name "Unit 731" was made formal in 1941. The lab was based at the Epidemic Prevention Research Laboratory in the Japanese Army Military Medical School in Tokyo. Its purpose was none of the given names, but biological and chemical warfare research.

The idea of Unit 731 first circulated in a memo written on April 23, 1936, concerning the establishment of reinforced military forces in Manchuria. The memo states that there would be a new "Kwantung Army Epidemic Prevention Department" and that it would be expanded later on.

The headquarters was set on three square kilometers of land in the Pingfang district of Manchuria. Many of the lab's buildings were hidden by a tall wall and high-voltage wire fences. The complex had around 150 buildings, including an incinerator, housing for prisoners, an animal house, and an airfield. The buildings were completely isolated from the outside world, with only a tunnel as the entrance.

Unit 731, along with two other units mentioned later, was created in violation of the Geneva Protocol of 1925 banning biological and chemical warfare. The protocol was signed on June 17, 1925, in Geneva, became effective on February 8, 1928, and was registered in the League of Nations Treaty Series on September 7, 1929.

Divisions

Within Unit 731, there were eight subunits focused on different topics of warfare. The first division worked on biological weapons like bubonic plague, cholera, anthrax, typhoid, and tuberculosis, with human subjects to experiment on. The second division focused on effectively spreading the biological weapons developed in the first. The third division focused on a specific way of spreading biological agents, by bomb, and the fourth on bacteria mass production and storage. The fifth through eighth divisions mostly supplied the rest of the unit, which included training workers, providing equipment, and overall administration.

Outside of Unit 731, Japan established two further departments: Unit 100 and Unit 516. Unit 100 was first designated the "Kwantung Army Military Horse Epidemic Prevention Workshop" and developed biological weapons separately from Unit 731. The "Kwantung Army Technical Testing Department", later called Unit 516, was established for additional research focused on chemical weapons.

People Involved

There were many people involved with the research of Unit 731, most of them remaining anonymous to this day. Shiro Ishii was the chief of Unit 731, with Masaji Kitano second in command. The other scientists were typically professors at universities or chiefs of medical research units, like Dr. Hisato Yoshimura, who directed the frostbite experiments on subjects, and Dr. Hideo Futaki, who led the tuberculosis research squad and some vivisections. Other personnel included Lieutenant Shunichi Suzuki, who after the trials went on to serve as Governor of Tokyo, and Amitani Shogo, who remained at the lab afterwards and received the Asahi Prize for outstanding scientific performance.

Shiro Ishii served in the Imperial Japanese Army from 1921 to 1945 as a medical officer, microbiologist, and the director of Unit 731. Before serving in the army, he had studied medicine at Kyoto Imperial University. He was first assigned as an army surgeon, then to the First Army Hospital and Army Medical School in Tokyo. His work soon impressed his superiors, which earned him postgraduate-level medical education. Ishii was promoted in 1925 and began advocating for a biological weapons research program.

After further promotions, Ishii began his biological weapons experiments at Zhongma Fortress. The government then granted him permission to set up Unit 731, in his hopes of digging deeper into the topic. After World War II, he was arrested for a short time by the US occupation authorities for his role in Unit 731, then received immunity from prosecution in exchange for his data. There are different accounts as to what he did after that: some say he traveled around giving talks about biological weapons, and others say he stayed in Japan providing free medical services.

What They Did

In Unit 731, the first division conducted many outrageous experiments that violated human rights and tested the limits of the human body. The prisoners used as subjects were of mixed ethnicity and gender, some pregnant, and some as young as three years old. Prisoners tied to stakes had to endure biological bombs that released plague-infested fleas or disease-carrying rats. Others had their bodies cut open with a scalpel and examined while they screamed for mercy on the table.

An unnamed Unit 731 surgeon, in an interview with the New York Times, described his experience with the unit. Recalling his first vivisection, he said he "cut [the prisoner] open from the chest to the stomach, and he screamed terribly, and his face was all twisted in agony… finally he stopped. This was all in a day's work for the surgeons…" (Kristof). Anesthetics were never used during vivisections because the doctors were afraid they would affect the results and data.

In another part of his article, Kristof interviews a former medical worker in Unit 731, Takeo Wano. Wano says he once saw a "six-foot-high glass jar in which a Western man was pickled in formaldehyde. The man had been cut into two pieces, vertically." There were many other jars at Unit 731's headquarters containing body parts from different people, often labeled with the victim's ethnicity. An anonymous Unit 731 veteran says that most of the jars were labeled Chinese, Korean, or Mongolian, although there were occasionally American, English, and French ones. Some body parts were even sent in from other places.

Other experiments included locking prisoners inside a pressure chamber to test how much pressure the body could handle before their eyes started popping out, exposing them to poisonous gas and many other biological and chemical weapons, cutting off limbs to study blood loss, reattaching severed limbs to different parts of the body, injecting horse urine into kidneys, and administering lethal doses of X-rays. Kristof noted that "The accounts are wrenching to read even after so much time has passed: a Russian mother and daughter left in a gas chamber, for example, as doctors peered through thick glass and timed their convulsions, watching as the woman sprawled over her child in a futile effort to save her from the gas."

Apart from infection-based experiments, Hisato Yoshimura led the frostbite experiments, which focused on the effects of frostbite on human limbs. He ordered prisoners' limbs frozen, often until they turned black. The prisoners were brought back inside only when an officer was sure their limbs were frozen; the officers would test limbs by beating them with a stick, since frozen limbs sound like wooden boards when struck.

After chilling prisoners' limbs to near 0 degrees Celsius with ice water, Yoshimura went on to chop off parts of the limbs, especially fingers, to record how the frostbite was affecting them. He and his team experimented on subjects as young as three years old, holding a needle in the child's finger to keep it from clenching into a fist.

Effects During War

The Japanese military used the biological weapons developed by Unit 731 directly on the Chinese civilian population. Agents from divisions other than the first would spread the diseases by train, road, and airplane. Many Chinese civilians developed terrible infections on their limbs, and only a few received treatment, since no local doctors or hospitals had seen the infections before.

Quzhou village, Ya Fan village, and Chong Shan village in Zhejiang Province suffered deeply from the bubonic plague, as well as dysentery, typhoid, cholera, and many other diseases. In an episode of BBC Correspondent, Wu Shi-Gen, a victim of Unit 731's biological weapons, tells the story of how the bubonic plague affected his nine-year-old brother. The family chose to lock the boy away in another room to minimize the chance of infection while he cried out from inside. Wu said he still remembers how he could not run in and help his brother when he cried out in pain.

Ya Fan village was afflicted with an unknown infection, known to residents as "The Rotten Leg Disease." A victim describes it as something that "started like an insect bite, then swelling and unbearable pain. Then his flesh started rotting away. Many died of it. Experts say it's probably Glanders, another of Unit 731's special recipes. Treatments were ineffectual and cost a fortune." He stated that while both he and his mother had the disease on their legs, she refused the medicine so that he could have it instead. She passed away a few months later.

Aside from its destructive uses, Unit 731's research was also used to heal Japanese soldiers with certain conditions. By studying conditions like frostbite and various diseases, the doctors could effectively pinpoint treatments for their sick soldiers. For instance, the frostbite experiments revealed that soaking frozen limbs in water between 100 and 122 degrees Fahrenheit (about 38 to 50 degrees Celsius) worked best.

Aftermath

As soon as World War II was over, the scientists at Unit 731's headquarters started burning the buildings down to get rid of the evidence. When Shiro Ishii and many others were taken into custody and investigated by the US authorities, they struck a deal with General Douglas MacArthur, who decided to let the Unit 731 scientists go free of war crimes charges in exchange for their medical research data.

In addition, the Japanese government was very late in apologizing to the victims of Unit 731, while paying war tributes to its dead war criminals. Japanese leaders have visited the shrines honoring them every year since 2013, offending neighboring countries and the victims. Many news articles have been written about this, yet they do not seem to matter to the Japanese government.

Many Japanese scholars also deny that Unit 731 ever existed and state that the history involving the group is fabricated, although there is plenty of evidence. Japanese history textbooks omit most of Japan's horrific acts in World War II, leading readers to believe that Japan was mostly a victim country rather than an aggressor. By and large, the Japanese public has a false sense of this history because their textbooks are skewed.

Former members of Unit 731 seem to have conflicting opinions about publicizing the topic. Yoshio Shinozuka and some others have given talks and shared information about Unit 731, but others like Toshimi Mizobuchi intend to keep the information hidden. A portion of Unit 731's members still attend annual staff reunion parties hosted by Mizobuchi.

Conclusion

Unit 731 was one of the most cruel groups ever to conduct human experimentation, yet few people I have met know what really happened. Although these inhumane experiments could be defended as useful to modern medical science, they were definitely not worth the cost in civilian lives and prisoners' suffering.


Glossary

Maruta — “Log” in Japanese. Prisoners were often called logs so that they could be experimented on without scientists feeling remorse.

Vivisection — Much like dissection, but performed on a living person.

References

Unit 731: Japan’s Biological Warfare Project. (2018). Retrieved March 14, 2018, from https://unit731.org/
Kristof, N. D. (1995, March 17). Unmasking Horror — A special report.; Japan Confronting Gruesome War Atrocity. Retrieved March 24, 2018, from https://www.nytimes.com/1995/03/17/world/unmasking-horror-a-special-report-japan-confronting-gruesome-war-atrocity.html?pagewanted=all
L. (2013, February 11). Unit 731: Japan’s biological force. Retrieved March 24, 2018, from https://www.youtube.com/watch?v=8LfMNX3TsT0
Working, R. (2001, June 5). The trial of Unit 731. Retrieved March 24, 2018, from https://www.japantimes.co.jp/opinion/2001/06/05/commentary/world-commentary/the-trial-of-unit-731/#.WqoQ6z9zJhE
McCurry, J. (2013, December 26). Japan’s Shinzo Abe angers neighbours and US by visiting war dead shrine. Retrieved March 24, 2018, from https://www.theguardian.com/world/2013/dec/26/japan-shinzo-abe-tension-neighbours-shrine
Beijing, S. A. (2014, October 17). China protests at Japanese PM’s latest WW2 shrine tribute. Retrieved March 24, 2018, from https://www.theguardian.com/world/2014/oct/17/china-protests-japan-shinzo-abe-yasukuni-shrine
Japanese PM Abe sends ritual offering to Yasukuni shrine for war dead. (2017, October 17). Retrieved March 24, 2018, from https://www.reuters.com/article/us-japan-yasukuni/japanese-pm-abe-sends-ritual-offering-to-yasukuni-shrine-for-war-dead-idUSKBN1CL355
Abe training jet photo sparks outrage in South Korean media. (2013, May 15). Retrieved March 24, 2018, from http://www.scmp.com/news/asia/article/1238533/abe-training-jet-photo-sparks-outrage-south-korean-media
Tsuneishi, K. (2005, November 24). Unit 731 and the Japanese Imperial Army’s Biological Warfare Program. Retrieved March 24, 2018, from https://apjjf.org/-Tsuneishi-Keiichi/2194/article.html
Pure Evil: Wartime Japanese Doctor Had No Regard for Human Suffering. (2016, June 15). Retrieved March 24, 2018, from https://www.medicalbag.com/despicable-doctors/pure-evil-wartime-japanese-doctor-had-no-regard-for-human-suffering/article/472462/
Tsuchiya, T. (2007, December 16). Retrieved March 24, 2018, from http://www.lit.osaka-cu.ac.jp/user/tsuchiya/gyoseki/presentation/UNESCOkumamoto07.html
Unit 731: One of the Most Terrifying Secrets of the 20th Century. (n.d.). Retrieved March 26, 2018, from https://www.mtholyoke.edu/~kann20c/classweb/dw2/page1.html

The Dangers of the Internet of Things

“Security by design is a mandatory prerequisite to securing the IoT macrocosm, the Dyn attack was just a practice run.”

-James Scott, Institute for Critical Infrastructure Technology

Introduction

With the advent of the Internet of Things in every facet of our existence, our lives have never been more convenient. It has become an important hub, promising a "smarter life" by establishing communication between embedded systems and people. The Internet of Things is a system consisting of many different kinds of sensors, used alone or combined, to establish connections between oneself and the surrounding environment. This new technology is pushing the world towards a more connected state; however, we must not disregard the security hazards that come along with it. The incredible number of connected devices presents numerous points where a malicious attacker may enter a system. If compromised, we may see the greatest leak of personal and private information in history. Although its purpose seems harmless enough, we must acknowledge the danger that hackers could invade one's private life through our expansive use of, and dependence on, the Internet of Things.

Background

Before delving into the dangers that come with dependence on the Internet of Things (IoT), one must first understand what these devices are and what they do for us.

Sometimes referred to as the Internet of Objects, the IoT promises to bring a technological revolution to the entire world by connecting many objects together in a seamless experience. The Internet has already made a monumental impact on communications, business, science, education, and humanity by connecting people from the farthest of places. With the IoT, the Internet will be further utilized as a means of communication between numerous objects.

Each object should be able to identify itself and develop intelligence through the information communicated among devices. This will help create new technologies and applications providing services from notifications and entertainment to automation and security. In fact, it is projected that by 2020, tens of billions of devices will be connected to the Internet and 50% of all new businesses will rely on the IoT.

With so many devices on the way, a clear outline was needed so that all devices could communicate with one another. The layered model in which these devices communicate is the Open Systems Interconnection (OSI) model, standardized by the International Organization for Standardization (ISO). It describes a stack of seven protocol layers, compared to the four used by the TCP/IP model. From the first layer to the last, the layers are Physical, Data Link, Network, Transport, Session, Presentation, and Application. The first two, Physical and Data Link, are concerned with how each device is physically connected to the network via hardware. The Network layer defines how routers deliver packets of data between source and destination hosts, while the Transport layer focuses on end-to-end communication and provides features including reliability, congestion avoidance, and the guarantee that packets are delivered in the order they were sent. The remaining three layers cover application-level messaging (e.g. HTTP/S).
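For readers who prefer code to prose, here is a minimal, purely illustrative sketch of that seven-layer stack as a C lookup table. The role descriptions paraphrase the paragraph above, and the grouping of layers 5 through 7 under application-level messaging follows that paragraph's reading; none of this is taken from any particular networking library.

```c
/* Illustrative only: the seven OSI layers described above as a C table. */
#include <stdio.h>

struct osi_layer {
    int         number;  /* 1 = lowest (Physical), 7 = highest (Application) */
    const char *name;
    const char *role;
};

static const struct osi_layer LAYERS[] = {
    {1, "Physical",     "raw bits over the physical medium"},
    {2, "Data Link",    "framing between directly connected devices"},
    {3, "Network",      "routing packets from source host to destination host"},
    {4, "Transport",    "end-to-end delivery: reliability, congestion avoidance, ordering"},
    {5, "Session",      "application-level messaging (with layers 6 and 7)"},
    {6, "Presentation", "application-level messaging (with layers 5 and 7)"},
    {7, "Application",  "application-level messaging, e.g. HTTP/S"},
};

int main(void) {
    /* Print the stack from bottom to top. */
    for (size_t i = 0; i < sizeof LAYERS / sizeof LAYERS[0]; i++)
        printf("Layer %d: %-12s %s\n", LAYERS[i].number, LAYERS[i].name, LAYERS[i].role);
    return 0;
}
```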

Furthermore, there are various methods of communication that IoT network technologies utilize. Each technology has its own advantages and disadvantages; the most widely used approaches are currently cellular, Wi-Fi, and Ethernet. These are mainly aimed at providing low-power, low-cost, long-range connections (with the exception of Wi-Fi, which does, however, provide the highest data throughput of all the current approaches). They are often used in large-scale deployments in business or education. Other mechanisms include BLE (Bluetooth Low Energy), ZigBee, NFC, and RFID. As these newer designs are improved and optimized, they are expected to supersede the older methods, providing higher bandwidth while using significantly less power.

As simple as their purpose may be, there is much more complexity behind IoT devices than a normal consumer realizes. This complexity is important, however, because it is where malicious attackers find security flaws to exploit.

Current Problems

With the heavy adoption of IoT devices throughout all parts of life, hackers have found more and more loopholes through which to steal information. The need to secure IoT infrastructure is of dire importance. A combination of security flaws, non-updateable software, and careless programming opens the possibility of huge breaches from the inside. Additionally, IoT devices are generally able to reach multiple administrative domains, and access to those allows attacks to become much more widespread and harder to contain. These devices are appealing targets because they essentially provide an unguarded entrance to one's private information without having to go through the front door.

Oftentimes, corporate greed and ignorance are at fault for the security breaches found in IoT appliances. For example, the microcontroller within a device will often run older or much simpler software. This keeps profit margins as high as possible, as mass production becomes cheaper and less complex. Software in routers, for instance, was found to be running Linux operating systems that, on average, were four years old at the time the product was released. Whether patches from that period were incorporated is unknown, as is whether further flaws in that version of the operating system were found post-release. Hackers can easily infiltrate a system running an outdated and unsafe operating system. Another problem is figuring out how to update products. A question we should be asking is how a computer-chip company such as Broadcom or Qualcomm plans to update the billions of chips inside IoT devices. Unfortunately, these companies have chosen to turn a blind eye and begin working on the next model rather than keep their older products safe. There is no incentive, and often no ability, to patch software once it has been mass-produced and released to the public, which leaves older devices more susceptible to attacks as newly discovered flaws go unfixed. To make matters worse, vendors often do not ship complete source code, filling the holes with "binary blobs" of indiscernible binary code. The result is that companies ship half-baked devices to consumers that do just what is advertised and little more.

Additional means of exploitation include taking advantage of the risks and vulnerabilities of a particular programming language. For example, hackers may attack a C-based device via buffer overflow. Nothing in C is range-checked by default, so it is very easy to overflow a buffer, and an overflow can overwrite the address a function returns to. Another example is writing too few characters into a buffer: C will continue processing, possibly expecting another byte or a null terminator, which can leak extra information or hit protected memory and cause a denial-of-service (DoS) crash. Simple code reviews and analysis before shipping would easily catch these problems, but companies often forego them to expedite release.
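To make the overflow concrete, here is a minimal, hypothetical sketch (not taken from any real device's firmware) of the unchecked copy behind many C buffer overflows, next to a bounded version:

```c
/* Sketch of a classic C buffer overflow and a bounded alternative. */
#include <stdio.h>
#include <string.h>

static void unsafe_copy(const char *input) {
    char buf[16];
    strcpy(buf, input);   /* no length check: input > 15 chars writes past buf */
    printf("unsafe: %s\n", buf);
}

static void safer_copy(const char *input) {
    char buf[16];
    strncpy(buf, input, sizeof buf - 1);  /* copy at most 15 bytes */
    buf[sizeof buf - 1] = '\0';           /* strncpy may not null-terminate */
    printf("safer:  %s\n", buf);
}

int main(void) {
    const char *attacker_input = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"; /* 32 bytes */
    safer_copy(attacker_input);      /* truncates safely to 15 characters */
    /* unsafe_copy(attacker_input);  would write past buf: undefined behavior,
     * classically overwriting the function's saved return address */
    return 0;
}
```

Standard tooling such as compiler stack protectors or AddressSanitizer would flag the unsafe version during testing, which is exactly the kind of pre-shipping analysis the paragraph above recommends.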

Lastly, hackers are often as good at social engineering as they are with computers. They rely on human interaction to trick people into breaking normal security procedures. The data obtained from these interactions is then used to access private systems and additional data.

Pressure must be put on companies not to take the easy way out. Meanwhile, consumers should be informed and alerted when security flaws and patches are released. With 20-50 billion IoT devices expected to flood consumers' homes and businesses by 2020, the need for security has never been greater.

Preventing future IOT attacks

Although the Internet of Things may promise a life of ease, the increasing adoption and integration of these devices into our lives and infrastructure brings many vulnerabilities as well. Despite all the security problems current IoT devices face, there are still things consumers can do to protect themselves. For instance, they can ensure that all their smart devices have their security features enabled and use strong passwords on them. Those who are more technologically adept can also close unused ports on devices and routers and utilize encryption on all their networks.

Conclusion

As long as this problem is ignored, attacks are only going to become more dangerous and fixing devices will become more expensive. Paying this cost now, through better software engineering and ongoing patching, is much cheaper than paying the cost of a possible security disaster. The rapid deployment and installation of IoT devices will require much effort from both companies and consumers to tackle the dangers that come along with them.

References

  1. Eastwood, Gary. “5 Of the Biggest Cybersecurity Risks Surrounding IoT Development.” Network World, Network World, 27 June 2017, www.networkworld.com/article/3204007/internet-of-things/5-of-the-biggest-cybersecurity-risks-surrounding-iot-development.html.
  2. Farooq, M. U., et al. “A Review on Internet of Things.” A Review on Internet of Things, International Journal of Computer Applications, Mar. 2015, pdfs.semanticscholar.org/2006/d0fca0546bdeb7c3f0527ffd299cff7c7ea7.pdf.
  3. Gerber, Anna. “Connecting All the Things in the Internet of Things.” IBM – United States, IBM, 3 Jan. 2018, www.ibm.com/developerworks/library/iot-lp101-connectivity-network-protocols/index.html.
  4. Lucciano, Michael. “How Hackers Are Taking Advantage Of IoT Security Vulnerabilities.” Wireless Design and Development, Wireless, 5 Apr. 2017, www.wirelessdesignmag.com/blog/2017/04/how-hackers-are-taking-advantage-iot-security-vulnerabilities.

CRISPR-Cas9

Possibly the most impactful scientific breakthrough in the 21st century with the potential to revolutionize genetics and medicine, but also to exponentially increase the destructive power of biological weapons.

By: Evan West

What Is CRISPR-Cas9?

CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats and is the naturally occurring means by which bacterial cells defend themselves against attack from viruses. In bacteria, repeating sequences of DNA known as CRISPR arrays are separated by spacers of DNA taken from bacteriophages that have attacked the bacterium before. Bacteria use RNA generated from the CRISPR arrays, known as guide RNA or gRNA, to recognize those bacteriophages and similar ones in the future. A nuclease known as Cas (a CRISPR-associated protein) is then used to cut apart the DNA of the virus, rendering it useless. Researchers have shown that the CRISPR-Cas9 system has applications for manipulating gene expression and for gene editing in animals, plants, viruses, and more. In this system, the gRNA identifies the target DNA sequence, which the system cuts out and potentially replaces; Cas9, the most commonly used protein in the CRISPR system, cuts the DNA at the point specified by the gRNA. After the DNA is cut, the cell attempts to repair the break and, more likely than not, introduces mutations that can disable the gene. Disabled genes are useful for research as they allow the study of a specific gene's effects, i.e. what changed now that it is gone. Alternatively, if a new DNA template is provided to the cell's DNA-repair machinery, the cell will use that template to fill in the missing information, allowing repairs or modifications to the DNA.
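As a toy illustration of the targeting step, the sketch below scans a DNA string for a site matching a 20-base guide sequence followed by an NGG motif, the PAM that the commonly used Cas9 requires. The guide and "genome" strings are invented for the example; the NGG requirement and the cut position roughly 3 bases upstream of the PAM are standard facts about Cas9, not details from this article.

```c
/* Toy model of Cas9 target recognition: a cut site is a 20-base match to
 * the guide (gRNA) immediately followed by an NGG PAM. Real tools must also
 * search the reverse strand and tolerate mismatches. */
#include <stdio.h>
#include <string.h>

#define GUIDE_LEN 20

static int has_ngg_pam(const char *s) {
    return s[1] == 'G' && s[2] == 'G';  /* "N" matches any base */
}

int main(void) {
    const char *guide  = "GACGTTAGCCTAGGATCCAA";           /* 20 nt, made up */
    const char *genome = "TTTGACGTTAGCCTAGGATCCAATGGCCA";  /* made up */

    for (size_t i = 0; i + GUIDE_LEN + 3 <= strlen(genome); i++) {
        if (strncmp(genome + i, guide, GUIDE_LEN) == 0 &&
            has_ngg_pam(genome + i + GUIDE_LEN)) {
            /* Cas9 cuts roughly 3 bp upstream of the PAM */
            printf("match at %zu, cut site near position %zu\n",
                   i, i + GUIDE_LEN - 3);
        }
    }
    return 0;
}
```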

CRISPR Versus Other Methods of Gene Modification

Meganucleases, ZFNs, and TALENs are other means by which DNA cutting and varying levels of modification can be achieved. These methods are effective, but CRISPR has some key advantages. First, CRISPR is the most efficient method, as its structure already includes the means of cutting DNA strands, so it does not need to be used alongside another tool. Next, CRISPR is more customizable than the other options, as its built-in gRNA can guide it to almost any DNA sequence. Finally, CRISPR can cut, and even modify, multiple different DNA sequences at the same time, allowing researchers to study and potentially change complex characteristics that span multiple genes or even multiple chromosomes.

Timeline of CRISPR

In 2000, after almost a decade of work, Francisco Mojica recognized that what had been reported by scientists as unique repeat sequences in bacteria and archaea actually shared a common set of features. Mojica coined the term CRISPR to describe these sequences, and in 2005 he reported that the sequences found in bacteria matched DNA from bacteriophages, leading to the hypothesis that CRISPR is part of an adaptive immune system. Also in 2005, another scientist, Alexander Bolotin, discovered the nuclease Cas9 and identified that the spacers all share a common sequence at one end, now known as the protospacer adjacent motif (PAM), which is required for target recognition. In 2007, scientists demonstrated that CRISPR does indeed form part of the immune system of bacteria and showed that Cas9 is likely the only protein required for interference. In 2008, CRISPR was shown to act on DNA targets, which ended the debate on whether CRISPR acted on DNA or RNA and showed that CRISPR had potential applications in non-bacterial systems. In 2010 CRISPR-Cas9 was first used to cleave DNA, and in 2013 CRISPR-Cas9 was harnessed for animal genome editing by Feng Zhang. Currently, most CRISPR research is done on human cells and animal models to determine whether the techniques are safe for use on people. A newer technique known as CRISPR-Cpf1 was developed in 2015 by the scientists who developed Cas9; Cpf1 allows simpler, more accurate, and more flexible use than the Cas9 system.

Potential Medical Applications

Currently, most changes introduced with genome editing are limited to somatic cells. Changes to DNA in somatic cells can result in a multitude of changes in gene expression, but these changes will not be passed on to future generations. Changing germline cells, however, can result in edits that are passed down, which raises ethical questions about using this technology to enhance the characteristics of one's descendants. Therefore, most medical applications will be limited to changes in somatic cells. Research is being done on using CRISPR as a cure for single-gene diseases such as cystic fibrosis, hemophilia, and sickle cell anemia. Because CRISPR can modify multiple genes at the same time, it could also address more complex diseases, including cancer, heart disease, or HIV. In the pursuit of global research that benefits human health, CRISPR had been shipped to 62 countries and shared with 2,339 institutions as of February 2018.

Potential Military or Terror Applications

Technology has no morality. This amoral nature can be demonstrated with a few simple examples. Nuclear technology can be used to provide large amounts of power, relatively cheaply and without contributing to global warming; it can also be used to create the most destructive bombs humanity has ever made. Computers allow billions of people around the world to connect, share ideas, and buy and sell products; conversely, computers are also used by black-hat hackers to steal information or destroy property. The point is that technology is not explicitly good or bad; rather, the morality of any technology depends on what it is used for. CRISPR is especially dangerous because researchers, inspired by its incredible potential to do good, are sharing their tools and findings with countries and institutions around the world. However, just as with other technologies, there are people, organizations, and governments who will see CRISPR as a means to inflict terror, sickness, and destruction.

One such actor is the Russian government, whose researchers have been supplied with the tools to do their own CRISPR research. Russia is on the list of countries that have received CRISPR even though it is common knowledge that Russia ran a substantial bioweapons program during the Cold War and is currently developing the next generation of nuclear weapons with the goal of establishing regional dominance. It is certainly imaginable that Russia would want to restart (if it ever actually ended) its bioweapons program and use CRISPR to further develop the destructive power of its weapons. Additionally, information about how to apply for tools, basic information, and troubleshooting is easily found with a quick Google search. Coupled with the increasing simplicity, accuracy, and falling cost of CRISPR, this ease of access should raise concerns that radical groups could use this technology for their own purposes in the future. The potential of CRISPR to be used for destruction compelled Director of National Intelligence James R. Clapper to include genome editing under the heading "Weapons of Mass Destruction and Proliferation" in the February 2016 Worldwide Threat Assessment of the US Intelligence Community. While the current state of the technology does not allow for the complicated changes necessary to engineer effective weapons, CRISPR is developing rapidly across the globe. Because of advances in DNA synthesis, computational power, and information sharing, it is not a question of whether weapons could be developed using CRISPR, but when, by whom, and how destructive they will be.

“According to biological warfare expert Dr. Steven Block, genetically engineered pathogens ‘could be made safer to handle, easier to distribute, capable of ethnic specificity, or be made to cause higher mortality rates.’”

While the ability to control DNA seems like something out of a dystopian science-fiction movie, CRISPR has cemented it as reality. Genetically modified super-soldiers, man-made epidemics, and changes to germline cells that negatively affect individuals and their future descendants are all potentially possible. Again, the technology as it stands does not allow for the complicated editing necessary to accomplish these modifications, but advancements in CRISPR's abilities are occurring at a rapid pace, moving these scenarios closer to reality.

Conclusion

The potential of CRISPR as a tool to enhance weapons is terrifying. However, there is solace to be found in the knowledge that CRISPR is also a means by which countries can defend themselves against an attack by a mutated virus: CRISPR allows for the rapid genomic mapping of viral DNA, which gives scientists an edge in creating a vaccine or identifying other effective treatments. Of course, it would be better to avoid an attack by a CRISPR-modified agent in the first place, so it is the opinion of this author that the sharing of CRISPR technology and research should be restricted to friendly nations and groups. These restrictions would not completely stop bad actors from utilizing the technology, but they would certainly make it more difficult, and make it easier for the international community to recognize and condemn those who do attempt to weaponize CRISPR. Unfortunately, it may already be too late for these restrictions to have maximum effect.

Glossary

DNA – deoxyribonucleic acid: the ‘source code’ of cells and life. Guides cell development and thereby the overall gene expression of the entire body.

RNA – ribonucleic acid: Generally used to carry messages from DNA to the proteins it binds with.

Nuclease – Protein that cuts Nucleotides in nucleic acids into smaller units. Typically used to cleave DNA or RNA.

Somatic cells – Cells that are not used for reproduction. In humans these are all cells except for sperm and eggs.

Germline cells – Cells that are used for reproduction.

References

“What Are Genome Editing and CRISPR-Cas9? – Genetics Home Reference.” U.S. National Library of Medicine. Accessed March 15, 2018. https://ghr.nlm.nih.gov/primer/genomicresearch/genomeediting.
Clapper, James R. “Worldwide Threat Assessment of the US Intelligence Community.” February 9, 2016. dni.gov/files/documents/SASC_Unclassified_2016_ATA_SFR_FINAL.pdf.
“CRISPR.” Broad Institute. March 15, 2018. Accessed March 15, 2018. http://www.broadinstitute.org/research-highlights-crispr.
Foley, Mackenzie. “Genetically Engineered Bioweapons: A New Breed of Weapons for Modern Warfare.” DUJS Online. March 11, 2013. Accessed March 15, 2018. http://dujs.dartmouth.edu/2013/03/genetically-engineered-bioweapons-a-new-breed-of-weapons-for-modern-warfare/.
“Genome Editing.” Wikipedia. March 07, 2018. Accessed March 15, 2018. https://en.wikipedia.org/wiki/Genome_editing.

 

Rights and Wrongs of Chemical and Biological Warfare

Introduction

The intended purpose of biological and chemical weapons is fundamentally to endanger lives. Biological agents, however, are ineffective as military weapons and are instead seen as a global threat to the human species, while chemical weapons have limited uses and are considered weapons of terror more than military weapons, albeit still harmful. The thought of such weapons ever being deployed induces fear, uncertainty in everyday life, and large-scale panic. Given such mentally and physically destructive consequences, one would unquestionably turn down the idea of these weapons and want them banned; however, biological and chemical weapons are useful for self-defense, relatively easy and cheap to produce, and logistically more detrimental to the targeted enemy.

Disadvantages

The disadvantages of biological and chemical weapons are much more evident than the advantages. One of the many drawbacks of biological weapons is their unavoidable lasting effect. Once released, the weapon has the potential to unleash massive epidemics of deadly infectious disease. An example is smallpox, which we no longer immunize against, making it nearly impossible to stop. If smallpox were released among an unsuspecting public again, it could wipe out millions as it randomly passed from person to person.

On top of their lasting and catastrophic effects, biological and chemical weapons fall all too easily into the wrong hands for terrorist attacks and other malicious uses. Anyone with a few years of training in chemistry and access to raw materials could produce a weapon like sarin gas. Although the possibility of a random person on the street doing this seems unlikely, it is not too hard to hire a chemist with some level of training and pull off a deadly attack. Considering how easy such weapons are to produce, one would expect more terrorist attacks using chemical weapons. However, those who try to make them without knowing what they are doing will likely die in the process; despite the low barrier to production, success is not easy. The most successful attack to date came in March 1995, when five members of the cult movement Aum Shinrikyo released sarin, a toxic nerve agent, on three lines of the Tokyo Metro during rush hour, killing 12 people, severely injuring 50, and causing temporary vision problems for nearly 5,000 others. As a confined space holding a large number of people, the subway was an ideal location for the use of chemical weapons.

While biological and chemical weapons are easy to obtain and hard to keep under control, their most toxic trait might be their psychological effects. It is no surprise that the mere thought of such weapons being deployed produces large-scale panic. The weapons induce malaise, fear, and anxiety about everyday life, which may remain high for years afterward, exacerbating pre-existing psychiatric disorders and raising the risk of mass sociogenic illness. One example of mass sociogenic illness followed a suspected bioterrorism attack at a Washington middle school in September 2001, when paint fumes sent 16 students and a teacher to the hospital. In the following month, over 1,000 students in several schools in Manila, Philippines flooded local clinics with ordinary flu-like symptoms after rumors spread via text message alerts claiming the symptoms were the result of a bioterrorism attack. A few days later, a man sprayed an unknown substance into a Maryland subway station, and 35 people reported symptoms of nausea, headache, and sore throat. The substance turned out to be window cleaner.

Advantages

Despite all the disadvantages of biological and chemical warfare, there are advantages to having these weapons around. Although most countries that pursue them do so to attack others, biological and chemical weapons are useful for self-defense: one country is less likely to attack another if being attacked in return is probable. For similar destructive power, they are also much cheaper and easier to build than a conventional bomb, which is attractive for nations that want to reduce their defense budgets; some call these weapons a "poor nation's atomic bomb," which is quite fitting. As inhumane as biological and chemical warfare can be, it can be considered tactically efficient: wounding one individual, which then requires the attention and care of two others on the enemy's side, is more detrimental than attempting to kill one individual.

Discussion

Chemical and biological weapons' effects often linger much longer than we desire. They do not behave like conventional explosive weapons: drop a conventional weapon and it explodes, shrapnel is released, and the damage is done and over relatively quickly. With certain biological weapons, we lose all control once they have been released, and their effects can go on for an unwanted amount of time. Along with being unmanageable, biological warfare affects people who were never targets, incapacitating innocent civilians. Once the knowledge of how to produce such a weapon is out, it easily falls into the wrong hands; rather than defending a country, these weapons can be used against one's own country. The unnerving thought of an attack ever occurring can haunt not just individuals but entire societies. The attack at the Washington middle school and the incident in Manila are prime examples of mass sociogenic illness, and their lasting effects serve as a reminder of the dangers of unintentionally amplifying the psychological responses to biological and chemical weapons, which also worsen already-existing psychological issues. These weapons were created with the intention of inflicting injury, causing disease, and killing humans; little did we know they would stick around like a guest who overstayed their welcome, be used to hurt rather than protect one's own country, and damage an entire society's psychological well-being.

Conclusion

Biological and chemical weapons have their advantages and disadvantages. Some see biological and chemical warfare as a gateway to complete havoc and simply inhumane; others can see the advantages, albeit there are not many. Given how greatly the disadvantages outnumber the advantages, the best option might seem to be simply to ban weapons of this sort and make them illegal. But how effective would that be? Just because there is a rule saying no more biological and chemical weapons does not mean people are going to listen. Once the knowledge of how to produce these weapons gets out, there is no getting it back. It becomes a matter of how to prevent their use, not how to stop their existence entirely.

  • https://www.npr.org/2013/05/01/180348908/why-chemical-weapons-have-been-a-red-line-since-world-war-i
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1121425/
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1732455/pdf/v057p00353.pdf
  • https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1193&context=cjil
  • https://www.wagingpeace.org/chemical-and-biological-weapons-use-in-warfare-impact-on-society-and-environment/
  • https://www.news-medical.net/health/Smallpox-Biological-Warfare.aspx
  • https://www.rt.com/op-ed/chemical-weapons-training-attainable-740/

Artificial Super Intelligence


People widely think that the development of AI could lead to a future of oppression by machines, but they tend to underestimate how quickly things can go from Siri to Skynet.

So first let’s go over the three main levels of artificial intelligence.

Level 1:  Artificial Narrow intelligence (ANI)

Level 1 is what we have today: AI models beating the best players of Go, or winning games of Dota 2 against top professionals. This is the lukewarm level, where we say, “Alright, this is not too concerning, but I can see how it could get worse.”

Level 2: Artificial General Intelligence (AGI)

Level 2 is Artificial General Intelligence, where an AI performs at roughly human level or just below across most fields, rather than excelling at only one. The gap between Artificial Narrow Intelligence and Artificial General Intelligence seems vast, but until we figure out exactly how to create AGI, there is no way to be sure.

But think about this: in today’s world, ANI is found in almost every device you use. The newest phones have specially designed neural net processors for image processing, every Google search is made possible by sophisticated neural networks, and self-driving cars running ANI are on the way. All of this is almost akin to the primordial soup of amino acids that life came from.

ASI on the other hand …

Level 3: Artificial Super Intelligence (ASI)

Level 3 is where things start to get real. This level describes any AI that surpasses human intelligence, covering everything from an AI only a little smarter than us to one more intelligent than the combined intelligence of everyone on Earth.

Artificial Superintelligence will most likely be created by an AGI with a recursive self-improvement algorithm built in, since it is very hard to hand-code something more intelligent than yourself. Once it has become an ASI, it is also very hard to stop, because it is both smarter than you and able to replicate itself onto the many readily available systems around it. Not to mention that computers are inherently better designed than humans, as the quick comparison after this list shows:

  • Hardware:
    • Speed: Neurons fire at 200 Hz at most, and signals travel along nerves at about 120 m/s, while machines can run at 5 GHz and their signals travel at nearly the speed of light.
    • Size: The brain is the size it is partly because nerve impulses are limited to about 120 m/s, whereas a computer’s memory and storage can be expanded almost without limit.
    • Reliability: Computers can run 24/7 given enough cooling and don’t deteriorate, unlike human neurons.
  • Software:
    • Editability: Software is easy to fix, just edit the code, whereas rewiring a human mind is far harder.
    • Group ability: Computers can talk to each other at nearly light speed, although so far humans seem to be doing better at actual collaboration.
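
To make the speed bullet concrete, here is the back-of-the-envelope arithmetic those round figures imply (a rough sketch only; the numbers are the approximations quoted above, not precise measurements):

    # Back-of-the-envelope comparison using the round figures quoted above.
    neuron_hz = 200          # max neuron firing rate (Hz)
    cpu_hz = 5e9             # fast CPU clock (Hz)
    nerve_speed_m_s = 120    # signal speed along neurons (m/s)
    circuit_speed_m_s = 3e8  # circuit signals, roughly light speed (m/s)

    print(f"clock-rate ratio:   {cpu_hz / neuron_hz:.1e}x")                   # ~2.5e+07
    print(f"signal-speed ratio: {circuit_speed_m_s / nerve_speed_m_s:.1e}x")  # ~2.5e+06

On these crude numbers alone, silicon has roughly a seven-order-of-magnitude head start on clock rate.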

Intelligence is an interesting thing, because it is very hard for organisms on lower levels to comprehend the actions of a vastly more intelligent being. We also tend to overestimate the difference between the dumbest and smartest humans. When AGI starts catching up to us, we will only see it as being smart “for an animal” or “for a computer.” When it hits the intelligence of the dumbest human, we will say, “Wow, now it’s like a dumb human!”, but a couple of hours later it could be smarter than Stephen Hawking.

What happens now?

Remember recursive self-improvement? Its nature is that a dumb AI improving itself can only do so at a slow pace, but imagine what happens when it is as smart as Hawking. Because of recursive self-improvement and the law of accelerating returns, it becomes smarter at an ever faster pace. The worst part is that we cannot even comprehend how smart it could get; it is like trying to teach an ant quantum physics. To put it in our perspective: a normal person has an IQ of around 100, a smart person maybe 150, and due to recursive self-improvement, an hour after reaching human intelligence the AI could be hundreds of times more intelligent. There is not even a word for an IQ of 1000.
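
The gap between ordinary improvement and recursive self-improvement is easy to see in a toy model. The sketch below is purely illustrative; “intelligence” here is an arbitrary score and both growth rates are invented, but it shows why a system whose gains feed back into themselves leaves any fixed-rate improver behind:

    # Toy model only: "intelligence" is an arbitrary score, not a real metric,
    # and both growth rates are invented for illustration.

    def fixed_growth(score, steps, gain=1.0):
        """Constant-rate improvement, like a tool that never redesigns itself."""
        for _ in range(steps):
            score += gain
        return score

    def recursive_growth(score, steps, rate=0.5):
        """Each step's gain is proportional to current capability."""
        for _ in range(steps):
            score += rate * score
        return score

    for steps in (10, 20, 30):
        print(steps, fixed_growth(100, steps), round(recursive_growth(100, steps)))
    # By step 30 the recursive curve is several orders of magnitude past the
    # fixed one on this made-up scale: exponential versus linear growth.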

As human beings, we know that intelligence equals power, which means that when an ASI does become real, it will be the most powerful being ever made. Things like North Korea and biological weapons will look to it like a scuffle between two small anthills. It could stomp on the two anthills for fun, but it could also cure disease and hunger and even reverse aging. Trying to convince it not to kill us would mean relying on its pity, if it even has emotions. The only way we could try to influence it is to shape its “childhood,” teach it the right values, and pray.

References

Barrat, J. (2013). Our final invention. New York: Thomas Dunne.

Bostrom, N. (2016). Superintelligence. Oxford: Oxford University Press.

En.wikipedia.org. (2018). Superintelligence. [online] Available at: https://en.wikipedia.org/wiki/Superintelligence [Accessed 14 Mar. 2018].


Survey of the Modern United States’ Virtual Nuclear Weapons Testing Program

In 1992, in the run-up to the Comprehensive Nuclear Test Ban Treaty, the United States ended explosive nuclear weapons testing. Since that time, the United States has never performed another nuclear weapons test in the atmosphere, underwater, or underground; the treaty, however, has never prevented the proliferation of nuclear materials or research into nuclear physics. Because of this, the United States has pursued alternative methods of testing nuclear weapons without conducting explosive tests. Modernizing the United States’ nuclear arsenal has meant adopting computational models to simulate virtual nuclear weapons, their effects, and how to maintain and update the arsenal itself. Here we conduct an overview of the nuclear laboratories and the computational technology that secure and maintain the nuclear arsenal, discuss the computational methods for simulating the effects of nuclear weapons, introduce the Stockpile Stewardship Program, provide a summary of experiments performed in support of the stockpile, and review the mixed legacy of virtual testing and its future with regard to the U.S. nuclear weapons complex.

Three primary laboratories oversee the nuclear testing facilities in the United States. Operating under the National Nuclear Security Administration (NNSA), the Lawrence Livermore, Los Alamos, and Sandia National Laboratories maintain and secure the United States’ nuclear arsenal. Each lab manages an Advanced Simulation and Computing (ASC) program sponsored by the NNSA to participate in, and advance, the nuclear arsenal’s safety and capabilities. With the use of high performance computers, the goal of the collective ASC program is to improve confidence in nuclear weapon predictions through simulations, develop quantifiable bounds on the uncertainty of computational results, and further increase predictive capabilities by combining simulation and experimental activities within nuclear testing facilities. Developing these models is of the utmost importance for the United States, which has chosen never to conduct an explosive nuclear weapons test again.

To achieve these results, multiple supercomputers are utilized within the labs: two are housed at Los Alamos, two at Sandia, and one at Lawrence Livermore. These computers can perform over one thousand trillion operations per second, providing immense computational power for simulating complex physical systems such as a nuclear explosion. We will conduct a short overview of the primary computing system of each lab before discussing in depth the past, present, and future of the Stockpile Stewardship Program.
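
To put “one thousand trillion operations per second” in perspective, here is a rough, purely illustrative calculation; the grid size and per-cell cost are made-up round numbers, not details of any real weapons code:

    # Illustrative arithmetic only; grid size and cost per cell are invented.
    ops_per_second = 1e15   # ~ one thousand trillion operations per second
    cells = 1000 ** 3       # a 1000 x 1000 x 1000 simulation grid
    ops_per_cell = 1000     # assumed cost of one physics update per cell

    seconds_per_timestep = cells * ops_per_cell / ops_per_second
    print(f"{seconds_per_timestep * 1e3:.1f} ms per timestep")  # 1.0 ms
    # Even a million-timestep run of this size would finish in under 20 minutes.

At that scale, sweeping a billion-cell grid takes about a millisecond per timestep, which is what makes whole-device simulation practical at all.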

Lawrence Livermore

Sierra is the upgraded supercomputer in use at the Lawrence Livermore National Lab. It replaces the Sequoia computer previously maintained there and is projected to provide upwards of six times the performance of its predecessor. Its funding was provided through the NNSA, and it will be used to help fulfill the Stockpile Stewardship Program’s mission of avoiding underground nuclear testing.

This computer will be used to support the NNSA’s ASC program. It is planned to provide computational results in several key scientific areas:

  • Materials Modeling
  • Turbulent flow and instabilities
  • Laser plasma calculations

These areas complement its primary task of simulating the current nuclear arsenal’s capabilities.

Sandia

Sandia’s participation in the ASC program is extensive. Its main contributions revolve around physics and engineering models and computational systems. Its models support extensive research into the U.S. nuclear stockpile by describing the multitude of physical processes that arise during a nuclear explosion.

Like the other labs, Sandia applies predictive science-based models to attain their results. In this case, Sandia develops the following physical models:

  • Material strength and damages
  • Radiation and electrical responses
  • Aerodynamics and vibrations
  • Thermal and fluid responses

Each of these models is implemented at the other ASC-sponsored labs for use in testing and experimentation.

Los Alamos

The Los Alamos National Laboratory houses two supercomputers: Cielo and Trinity. Both provide required support for the Stockpile Stewardship Program. Cielo was developed under a joint effort between Los Alamos and Sandia and, like the other supercomputers, is used by all three labs partnered under the ASC program. It has performed many simulations, including one involving the mitigation of an asteroid impact. Trinity is a newer supercomputer designed to perform similar functions.

Stockpile Stewardship Program

Clearly, each of the ASC labs harbors plenty of computational strength to simulate the effects of nuclear explosions and to provide the safety and security capabilities required for maintaining the aging U.S. nuclear arsenal. It is because of this aging arsenal and the need for continued testing that, in 1995, the United States created the Stockpile Stewardship Program (SSP). Its mission is to make scientific and technological advancements in order to assess the nuclear weapons arsenal without relying on explosive nuclear testing.

This program has a number of specific missions regarding the safety and security of the arsenal. It covers the broad range of weapon life extension, predictive modeling, plutonium science, high-energy-density science, infrastructure improvements, and high-explosive science. All of these efforts require the national labs’ supercomputers to properly model each scenario and provide results without actual nuclear testing. Additionally, a number of facilities exist to support these experiments.

Each facility exists within either a national lab or a security site within the country. We list some of the key facilities along with a brief description of their responsibilities. First, the Z-Machine provides an investigative platform in which scientists can further understand the properties of materials, plasma, and radiation. Omega provides a similar platform, but also performs fusion experiments and is accessible to universities. The Dual-Axis Radiographic Hydrodynamic Test Facility (DARHT) uses two large X-ray machines to record the interiors of materials in three dimensions while, within experiments, those materials are subjected to hydrodynamic shock to simulate the implosion process of nuclear bombs. Furthermore, BEEF (the Big Explosive Experiment Facility) performs experiments on materials as they are driven together by high-explosive detonations. Many such experimental facilities exist, and they draw on the computational power of the national labs’ supercomputers so that the United States’ aging nuclear arsenal can be maintained.

With the future of the United States’ arsenal in mind, it is important to recognize the faults that still exist within the current program. While the maintenance and simulation of the nuclear arsenal seek to examine and model every detail of the nuclear process, and are doing so successfully in many cases, the management of the missiles and nuclear material themselves may not always be up to par. Deploying the weapons still requires floppy disks to activate the arsenal even as supercomputers simulate entire nuclear explosions, and this kind of disparity exists in several areas of the United States’ arsenal. Additionally, many other countries have been less attentive in adhering to the Comprehensive Nuclear Test Ban Treaty. By refusing to fully adhere to the treaty, other countries can continue to develop new nuclear weapons and test them in much the same way the United States did before it halted testing in the nineties.

In general, the United States’ virtual weapons testing program is reasonably extensive. With the goal of simulating every process of a nuclear detonation, it is an understandably complex and difficult problem to solve. Avoiding the environmental damage caused by classic nuclear testing is imperative, though, so these virtual facilities are an important and continuing step forward in the development and testing of nuclear weapons for the United States.


References

[1] https://wci.llnl.gov/about-us/weapon-simulation-and-computing
[2] https://www.files.ethz.ch/isn/135139/DH17.pdf
[3] https://nnsa.energy.gov/aboutus/ourprograms/defenseprograms/futurescienceandtechnologyprograms/asc
[4] https://nnsa.energy.gov/aboutus/ourprograms/defenseprograms/stockpilestewardship
[5] https://nnsa.energy.gov/sites/default/files/Quarterly%20SSP%20Experiment%20Summary-Q1FY15.pdf
[6] http://large.stanford.edu/courses/2011/ph241/hamman2/
[7] https://www.ucsusa.org/nuclear-weapons/us-nuclear-weapons-policy/us-nuclear-weapons-arsenal
[8] https://en.wikipedia.org/wiki/United_States_Department_of_Energy_national_laboratories
[9] http://www.sandia.gov/missions/nuclear_weapons/about_nuclear_weapons.html
[10] http://www.lanl.gov/projects/cielo/index.php
[11] http://www.lanl.gov/asc/
[12] https://asc.llnl.gov/coral-info
[13] http://www.sandia.gov/asc/computational_systems/index.html

Advanced Persistent Threats

Threats In the Cyber World

Cyber attacks nowadays are growing steadily more sophisticated, more serious, and more persistent. Ever since the gradual shift in IT infrastructure toward mobility and cloud computing, hackers and large organized cyber crime organizations have proliferated, thriving in this new, “target-rich” environment. Among the many new threats on the horizon are highly targeted, long-term, international espionage and sabotage campaigns: advanced persistent threats.

What Are Advanced Persistent Threats?

An advanced persistent threat (APT) is a targeted set of stealthy and continuous computer hacking processes, usually aimed at a specific entity. APTs are defined by the requirements in their name:

  • Advanced: Operators behind the APT have extensive knowledge, tools, and techniques at their disposal. These operators are capable of combining multiple targeting methods and tools to compromise and maintain access to a target.
  • Persistent: Operators pursue specific objectives rather than opportunistic financial gain. They do this through continuous monitoring and interaction, prioritizing the need to remain stealthy and undetected, even if it takes a long period of time.
  • Threat: APTs are a threat because they have both intent and capability. APT attacks are well-coordinated, executed by highly intelligent operators who are well-funded and have a clear goal in mind.

An important distinction to make is that while all APTs are targeted attacks, not all targeted attacks are APTs. Here are some unique traits that set APTs apart from targeted attacks:

  • Customized attacks: APTs often use customized tools and intrusion techniques specific to the task they are designated to perform, including zero-day vulnerability exploits, viruses, worms, and root-kits. On occasion, APTs have been noted to deploy a “sacrificial threat”: a decoy that, once removed, tricks the victim into thinking the real threat has been eliminated.
  • Low-and-slow: APTs occur over long periods of time, during which the attackers operate stealthily, avoiding detection until their goal has been achieved. Where most targeted attacks are opportunistic, APT attacks are more methodical and go to extraordinary lengths to avoid detection.
  • Ulterior purpose: In contrast with most run-of-the-mill targeted attacks, APTs are usually funded and used by military and state intelligence. APTs have been known to be used for gathering confidential intel around the world, as well as disrupting operations, destroying technology, and even international sabotage, usually producing far grander results than what initially meets the eye.
  • Specificity: Although most computer systems are vulnerable to APTs, the users of APTs are usually very specific about their targets, which usually concerns a very small pool of organizations. APTs have been widely reported to attack government agencies and facilities, defense contractors, and other producers of goods on an international scale.

In addition to these traits, APTs also have a defined set of criteria:

  • Objectives: who or what is being targeted, and why?
  • Timeliness: how much time is spent doing reconnaissance on your target?
  • Resources: what do you need to know in order to carry out the task?
  • Risk tolerance: how much are you willing to sacrifice to stay undercover?
  • Skills and methods: what tools and techniques will need to be used?
  • Actions: what exactly will your planned threat do?
  • Attack origination points: from which points will your attack start?
  • Numbers involved with attack: how many systems will be involved and which have more importance/weight?
  • Knowledge source: how much is known about your planned threat?

How do APT attacks operate?

Basic APT attacks tend to be executed in four stages: incursion, discovery, capture, and exfiltration (specifically in this order).

Incursion: attackers break into a targeted network using social engineering or even zero-day vulnerabilities  to infect systems with malware.

  • Social Engineering: a technique that baits targeted people to open links or attachments that seemingly come from trusted sources/individuals.
  • Zero-day vulnerabilities: security loopholes that usually stem from software. Only very sophisticated attackers use these, as zero-day vulnerabilities are quite hard to discover.

Discovery: attackers take their time and avoid detection while mapping out an organization’s systems and scanning for their confidential data such as exposed credentials.

Capture: attackers capture intel over a long period of time and, on occasion, secretly install malware to gain control of the environment.

  • Control: APTs sometimes take the opportunity to seize control of software or hardware systems, as in the case of Stuxnet. In addition to capturing intel, Stuxnet reprogrammed the kind of industrial control systems responsible for managing gas pipelines, power plants, oil refineries, and the like. APTs are capable not just of reprogramming such systems, but of destroying them.

Exfiltration: captured intel is sent back to the attackers for analysis and further exploitation.


How do we detect APTs?

Although APTs, by nature, are meant to be difficult to detect, they do exhibit a few key indicators that can be observed.

  • Increases in activity at odd hours, when employees/people wouldn’t usually be accessing the network.
  • Discovering widespread backdoor Trojans, which are used to maintain access to a system even if it is discovered and the system credentials are changed.
  • Unexpectedly large flows of data from internal origins to possibly external systems.
  • Discovering mysterious data bundles. Attackers typically aggregate data over a period of time before sending it out of the network, and these bundles can turn up in places where other data isn’t normally stored. (A minimal sketch of how two of these indicators might be checked follows this list.)
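
As a concrete illustration of the first and third indicators, here is a minimal sketch of an event check. It is not taken from any real security product; the log format, field names, and thresholds are all hypothetical:

    # Minimal sketch of two indicators above (off-hours activity, large
    # outbound transfers); log format, field names, and thresholds are
    # hypothetical, not from any real product.
    from datetime import datetime

    OFF_HOURS = range(0, 6)            # 00:00-05:59 counts as "odd hours" here
    OUTBOUND_LIMIT = 500 * 1024**2     # flag transfers over 500 MB

    def check_event(event):
        """event: dict with 'time' (ISO string), 'direction', 'bytes', 'user'."""
        alerts = []
        ts = datetime.fromisoformat(event["time"])
        if ts.hour in OFF_HOURS:
            alerts.append(f"off-hours activity by {event['user']} at {ts}")
        if event["direction"] == "outbound" and event["bytes"] > OUTBOUND_LIMIT:
            alerts.append(f"large outbound transfer: {event['bytes']} bytes")
        return alerts

    print(check_event({"time": "2018-03-18T03:12:00", "direction": "outbound",
                       "bytes": 2 * 1024**3, "user": "svc_backup"}))

Real detection correlates many such signals over time; a single rule like this would be far too noisy on its own.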

Why is this dangerous?

Although APTs usually target larger, more important organizations, the threat of one being used on the average Joe’s system is still very much there. As cloud computing and the number of distributed systems increase, so does the number of cyber criminals capable of delivering an APT attack. This is not unlike what happened with nuclear weapons: at first, only countries with massive amounts of funding were financially capable of creating a nuclear weapon, but as information, blueprints, and resources spread around the world, a number of smaller countries and states began displaying the capability; the same is happening with APTs. As previously mentioned, APTs used to focus on large organizations such as governments. Nowadays APTs have been seen targeting smaller organizations, such as Target Corporation, the second largest discount retailer in the United States (not even the first!). According to reports in 2013, Target lost a large number of credit and debit card numbers to an APT, reportedly cutting its quarterly profits by almost 50%, with some customers even proclaiming their intent never to return. This further accentuates the point that APTs will become more and more prominent in society, no longer affecting only large or even secret organizations, and easier to execute with the vast knowledge found on the Internet nowadays.

Closing thoughts on APTs

APTs, once used primarily to target high-profile organizations or companies with high-value data, are now becoming more common among smaller and less prominent companies. As attackers turn to more sophisticated methods of attack, companies of all sizes, and possibly even ordinary users, must look to establish rigorous security capable of detecting and responding to these threats. Even then, additional security may not do much: even basic measures such as encryption offer little protection if an existing APT is monitoring the encryption and decryption processes. Adi Shamir, the godfather of modern cryptography, summed up APTs and their relevance in a panel session on how cryptography is slowly becoming less and less relevant:

“In the Second World War if you had good crypto protecting your communication you were safe. Today with an APT sitting inside your most secure computer systems, using cryptography isn’t going to give you much protection.”

– Adi Shamir

It’s truly a dangerous world.


References

  1. Lord, Nate. “What Is an Advanced Persistent Threat? APT Definition.” Digital Guardian. July 27, 2017. Accessed March 18, 2018. https://digitalguardian.com/blog/what-advanced-persistent-threat-apt-definition
  2. Advanced Persistent Threats: A Symantec Perspective. PDF. Symantec.
  3. Leyden, John. “Prepare for ‘post-crypto World’, Warns Godfather of Encryption.” The Register. Accessed March 18, 2018. http://www.theregister.co.uk/2013/03/01/post_cryptography_security_shamir
  4. “Advanced Persistent Threats – Learn the ABCs of APT: Part A.” Secureworks. Accessed March 18, 2018. http://www.secureworks.com/blog/advanced-persistent-threats-apt-a.
  5. Higgins, David. “​The Growing Challenge of Advanced Persistent Threats.” CSO | The Resource for Data Security Executives. March 23, 2018. Accessed March 23, 2018. https://www.cso.com.au/article/603403/growing-challenge-advanced-persistent-threats/.


An Overview of Potential Dangers Arising From High Energy Experiments

By Justin Raizes

A Brief History

Over the past several decades, the desire to explore particle physics has motivated the construction of higher and higher energy particle accelerators. As these accelerators have been built, concerns over the safety of the experiments have arisen.

In 1999, during the construction of the Relativistic Heavy Ion Collider (RHIC), an article in Scientific American titled “A Little Big Bang” spurred several entries in the magazine’s Letters to the Editor section about the safety of the new collider. One of the writers, Michael Cogill, was concerned in general about “somehow [altering] the underlying nature of things such that it cannot be restored”, while the other, Walter Wagner, had more specific concerns about the possibility of creating a miniature black hole. Frank Wilczek of the Institute for Advanced Study in Princeton, N.J. responded to the letters with reassurance that such a disaster was very unlikely. Nevertheless, the media quickly latched onto the concept and trumpeted it with alarming titles such as “A Black Hole Ate My Planet” and “A ‘big bang’ machine”.

In response, Brookhaven National Laboratories, the commissioners of the RHIC, asked a panel of scientists to review the speculative disaster scenarios and assess the safety of the project. The panel ultimately found the project to have a high safety margin, and it proceeded.

These issues arose again in 2008, prior to the first run of the Large Hadron Collider (LHC). Again, media outlets published articles with alarming titles such as “The Final Countdown”, and a lawsuit was even filed against the project seeking a restraining order. CERN, the commissioners of the LHC, asked a panel of scientists to review the results of a 2003 study into the safety of the LHC. Again, the panel ultimately found the project to have a high safety margin, and it proceeded.

Overview of Disaster Scenarios Considered

The concerns which arose generally fell into one of three categories:

  1. Formation of a miniature black hole or other gravitational singularity which absorbs matter.
  2. Triggering of a vacuum instability.
  3. Formation of “strangelets” which absorb matter.

Miniature Black Holes

The black hole is a widely recognized phenomenon, even if it is not well understood by the average layman. A black hole consists of matter so extraordinarily dense that space-time warps around it and it absorbs surrounding matter.

If a miniature black hole were to form on Earth, it would begin to eat away at the surrounding matter, eventually consuming the Earth. However, as shown by Giddings and Mangano in 2008, this would occur at an extremely slow rate. In fact, the formation of a miniature black hole would not significantly reduce the lifespan of the Earth. Furthermore, other effects, such as thermal impact, would also not significantly change the condition of the Earth. With respect to direct human impact of a miniature black hole, we are reassured by Peter Fisher’s statement that “a fast moving black hole with the mass of the moon (radius of a proton) will go right through you with no damage.”

Of course, for any of this to happen, a miniature black hole would actually have to form on Earth. The RHIC safety panel considered both classical and quantum gravity. They determined that the masses and distances involved at the RHIC are much too small and too large (respectively) to create any sort of black hole, and that the probability of emitting a graviton at the RHIC was on the order of 10^-34. Additionally, cosmic rays regularly collide with much more energy than is present at the RHIC, and we have observed no formation of a black hole within our solar system’s vicinity.

Vacuum Instability

Contrary to how a layman thinks of empty space, empty space is actually highly structured and can exist in various states. In quantum mechanics, a vacuum is the state of lowest possible energy. It has been theorized that our current vacuum is only a false vacuum, having a locally, but not globally, minimal energy. If this is true, then a sufficiently violent disturbance might trigger a decay into a different state. Such a decay would spread throughout the universe at the speed of light and be “catastrophic”.

The 1999 panel investigating the safety of the RHIC claimed that “theory strongly suggests that any possibility for triggering vacuum instability requires substantially larger energy densities than RHIC will provide”. However, rather than simply relying on this, they also brought up the point that cosmic ray collisions have been occurring throughout the history of the universe, and concluded that “if such a transition were possible it would have been triggered long ago.” For this point, they cited Hut and Rees’ 1983 work detailing the number of cosmic ray collisions whose effects we have observed and examining the probability of these past collisions triggering an observable vacuum phase transition.

Strangelets

A “strangelet” is a form of quark matter which contains many strange (s) quarks. Under either high pressure or high temperature, quarks are no longer bound into individual hadrons. One of the primary goals of the RHIC was to provide evidence of quark-gluon plasma, the state induced by high temperature, which can be “accurately described as a gas of nearly freely moving quarks and gluons”. On the other side of the spectrum, quark matter is the name given to such matter under high pressure and low temperature.

Ordinary matter is primarily composed of up (u) and down (d) quarks, the lightest varieties. As quark matter is compressed, the Pauli Exclusion Principle, which forbids any two quarks within the same quantum system from sharing a state, forces quarks into higher and higher energy states. Eventually, some up and down quarks become strange quarks in order to reduce the total energy, so by the time equilibrium is reached there is a finite density of strange quarks.

The dangerous strangelets are those that are both negatively charged and stable enough to come to rest in ordinary matter. Once a strangelet has done so, however, the results are catastrophic. It would be captured by some ordinary nucleus in the environment, quickly fall into the lowest Bohr orbit, and react with the nucleus, absorbing several neutrons to form a larger strangelet. The reaction would be exothermic, and afterwards the strangelet would have positive charge. However, if the energetically preferred charge were negative, it would quickly return to a negative state by absorbing surrounding electrons. This process would continue until the strangelet’s radius approached the electron Compton wavelength, about 4×10^-11 cm, at which point it would begin to behave differently. Its baryon number would be on the order of 10^6, and it would begin to trigger electron-positron pair creation. The positrons would surround the strangelet as a Fermi gas. Any atom which approached the strangelet would be stripped of its electrons by electron-positron annihilation, and the bare nucleus would be absorbed by the strangelet core. The panel reviewing the safety of the RHIC remarked, “We know of no barrier to the rapid growth of a dangerous strangelet.”

Fortunately, it was concluded that the formation of such a strangelet at the RHIC was extremely unlikely. Strangelets are cold, dense matter, while heavy ion collisions are hot, so the second law of thermodynamics works against strangelet formation at the RHIC. Additionally, negatively charged strangelets require many strange quarks, and the more strange quarks a strangelet requires, the harder it is to produce. These two facts not only make it unlikely that a dangerous strangelet would form in the first place, but also suggest that any such strangelet would be too unstable to reach ordinary matter and begin growing.

Again, rather than simply relying on theory, the RHIC safety panel also brought up experimental evidence from cosmic ray collisions. They computed the number of heavy ion collisions taking place on our (relatively) nearby friend the Moon, and observed that the Moon is, in fact, not made of strange matter (or cheese). They computed that over the five billion year lifetime of the Moon, roughly 10^11 dangerous strangelets would have been formed by cosmic ray collisions, yet none of them survived contact with lunar soil. Using extremely conservative estimates, the panel placed a safety factor of nearly 10^22 between the values of the physical constants that would be required to cause alarm and their actual values. In short, we are not likely to turn into strange matter anytime soon.

Bigger, Badder Colliders

It is quite possible that we will create even larger colliders in the future; within the span of a decade, we upgraded from the RHIC to the LHC. However, the incredible safety margins placed on current colliders make it extremely unlikely that we will create an Earth-destroying collider anytime soon. Furthermore, these safety concerns are revisited each time we build a new collider: during the creation of the LHC, all the safety concerns raised about the RHIC were considered, and all were found to have high safety margins, mostly on the strength of cosmic ray data. So you don’t have to be concerned about being eaten as a snack by a particle physics experiment gone wrong, at least for now, anyways.

References

F. Carus, “The final countdown?” The Guardian, September 2008. [Online]. Available: The Guardian, http://theguardian.com [Accessed February 20, 2018].

J. Ellis, G. Guidice, M. Mangano, I. Tkachev, and U. Wiedemann, “Review of the safety of LHC collisions,” Journal of Physics G: Nuclear and Particle Physics, vol. 35, no. 11, p. 115004, September 2008.

K. Locock, “A ‘big bang’ machine,” ABC Science,  July 1999. [Online]. Available: ABC Science, http://abc.net.au [Accessed February 26, 2018]

M. Tegmark and N. Bostrom, “Is a doomsday catastrophe likely?” Nature, vol. 438, p. 754, December 2005.

R. Jaffe, W. Busza, F. Wilczek, and J. Sandweiss, “Review of speculative “disaster scenarios” at RHIC,” Reviews of Modern Physics, vol. 72, iss. 4, October 2000.

R. Matthews, “A black hole ate my planet,” New Scientist, August 1999. [Online]. Available: New Scientist, http://newscientist.com [Accessed February 20, 2018]

S. Giddings and M. Mangano, “Astrophysical implications of hypothetical stable TeV-scale black holes,” Physical Review D, vol 78, August 2008.

T. Leonard, “‘Big Bang’ machine could destroy the planet, says lawsuit,” The Telegraph, April 2008. [ Online] Available: The Telegraph, http://telegraph.co.uk [Accessed February 26, 2018]

W. Wagner and F. Wilczek, “Black Holes at Brookhaven,” Scientific American, p. 8,  July 14, 1999.

Stuxnet

Stuxnet: The Grandfather of Cyber Weapons

Stuxnet, the world’s first known cyber weapon, not only had technical and political ramifications as a cybersecurity exploit that became a key player in the Iran nuclear negotiations; more importantly, it cemented cyber weapons as a non-trivial defensive and offensive tool in the modern nuclear age. First discovered in 2010, Stuxnet was a computer worm that exploited vulnerabilities in the Siemens software on Iran’s nuclear control computers, causing uranium enrichment centrifuges at the Natanz enrichment facility to rotate out of control and eventually tear themselves apart. This paper will examine the technical logic and implementation behind the Stuxnet attack, its discovery, its impact on the Iran nuclear program, and the precedent it set as the first global cyber weapon.

How does Stuxnet work?

The goal behind Stuxnet was to hinder or disable Iran’s efforts to become a nuclear state, and it was engineered around that design goal. Consequently, all of Stuxnet’s capabilities revolve around its ability to execute a targeted and contained attack on Iran’s nuclear computing units specifically. On Iranian nuclear control systems, normal use is as follows: the Siemens Step 7 software is used to program industrial control logic, which is transferred to the PLC (Programmable Logic Controller) that runs the centrifuges. In turn, Windows database software stores important information about each centrifuge, such as its speed and notifications of potential errors. Stuxnet managed to exploit zero-day (previously unknown or undiscovered) vulnerabilities in the Siemens Step 7 and Microsoft software to incapacitate the centrifuges while remaining undetected.

The most commonly cited mechanism by which Stuxnet gained access to the computer network is an infected USB drive, from which it automatically loaded itself onto computers with open file sharing. From there, it used the default password of the Siemens Step 7 software to gain access to the database and load itself onto the computer. To propagate to other computers on the network, it infected PLC data files and copied itself into them, and it carried a peer-to-peer update mechanism to update all instances once one of them gained control at the system level. The last step of gaining access was to check that the PLC controlled at least 155 frequency converters in total, a little under the number known to be used in Iran’s centrifuge cascades; this check helped ensure Stuxnet was targeting the Iranian centrifuges only. Once it loaded malicious code onto the PLC, it also verified that the motors were running at 800-1200 Hz as an additional check that it was indeed on the correct centrifuge controller.

At this point, Stuxnet is ready to execute the attack. It increases the centrifuge frequency to 1410 Hz for 15 minutes, then sleeps to avoid detection. After 27 days, it slows the frequency to 2 Hz and sleeps again, and the process repeats, speeding up and slowing down the centrifuges. To avoid detection, it sends the correct frequency of 800-1200 Hz back to the database, and in the case of a failsafe, it runs the centrifuges at normal frequency. Additionally, Stuxnet used stolen RealTek certificates to avoid detection by antivirus software. Overall, Stuxnet used four different zero-day vulnerabilities in two different operating systems in a highly complex and targeted cyber attack that was completely unprecedented in scope and ultimately effective in both its attack and its stealth.
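
The targeting checks and the attack cycle described above can be summarized in a short sketch. This is a toy reconstruction, not Stuxnet’s actual code: the 155-converter and 800-1200 Hz checks and the 1410 Hz / 2 Hz cycle come from the description above, while the function names, the exact dormant durations, and the falsified “reported” value are illustrative assumptions:

    # Toy reconstruction of the behavior described above, not Stuxnet's code.
    # The numeric checks and the 1410 Hz / 2 Hz cycle come from the prose;
    # function names, dormant durations, and the falsified "reported"
    # frequency are illustrative assumptions.

    def looks_like_target(num_frequency_converters, motor_hz):
        """Arming checks: enough converters, motors in the 800-1200 Hz band."""
        return num_frequency_converters >= 155 and 800 <= motor_hz <= 1200

    def attack_cycle():
        """Yield (commanded Hz, falsified reported Hz, duration) per phase."""
        yield 1410, 1064, "15 minutes"        # overspeed burst
        yield 1064, 1064, "about 27 days"     # dormant, behave normally
        yield 2,    1064, "a short interval"  # near-stall slowdown
        yield 1064, 1064, "about 27 days"     # dormant again, then repeat

    if looks_like_target(num_frequency_converters=164, motor_hz=1007):
        for commanded, reported, duration in attack_cycle():
            print(f"drive at {commanded:4d} Hz, report {reported} Hz, for {duration}")

The point of the falsified second value is the deception: operators watching the database saw frequencies inside the normal 800-1200 Hz band the entire time.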

Discovery

Stuxnet was discovered by Sergey Ulasen, then working at the internet security company VirusBlokAda (he later joined Kaspersky). While investigating a customer complaint that a computer kept rebooting, he found the Stuxnet malware on the machine. Both Siemens and Microsoft have since released security patches that address the flaws exploited by Stuxnet, although Microsoft failed to fix them on the first try, requiring two additional updates. It is estimated that Stuxnet affected a little under 1,000 Iranian centrifuges. The attack is widely credited to Israel and the United States, as both countries were concerned with the progression of the Iranian nuclear program, but neither has publicly confirmed its involvement.

Impact

Stuxnet is estimated to have set back the Iranian nuclear program, whose covert enrichment efforts had been revealed in the mid-2000s, by two years. More significantly, however, Stuxnet was proof that cyber attacks can reach into the physical world and damage physical infrastructure. In the age of technology, modern warfare will increasingly rely on cyber weapons like Stuxnet to weaken enemy resources. Additionally, the code of Stuxnet is available on the internet, making it an open-source cyber weapon potentially capable of attacking power grids, nuclear plants, or other infrastructure if the source code is suitably altered. Stuxnet makes extremely clear the need for strong security practices as we move into an increasingly digital, and increasingly vulnerable, world.

Written by Sabrina Tsui

Sources

Corera, Gordon. “What Made the World’s First Cyber-Weapon so Destructive?” BBC IWonder, BBC.
Holloway, Michael. Stuxnet Worm Attack on Iranian Nuclear Facilities. 16 July 2015.
“Interview with Sergey Ulasen, The Man Who Found The Stuxnet Worm.” Nota Bene Eugene Kasperskys Official Blog.
“Iran Nuclear Program.” Wikipedia, Wikimedia Foundation.
Jones, Brad. “The Legacy of Stuxnet.” Digital Trends, 7 Mar. 2016.
Katz, Yaakov. Stuxnet Virus Set Back Iran’s Nuclear Program by 2 Years. 15 Dec. 2010.
Krebs, Brian. Microsoft Fixes Stuxnet Bug Again. 10 Mar. 2015.
Nachenberg, Carey. “Dissecting Stuxnet.” Stanford University.
“Protecting Productivity – Integrated Industrial Security.” Patches and Updates – Industrial Security – Siemens.
“Stuxnet.” Wikipedia, Wikimedia Foundation.