IEEE SPECTRUM

30 May 2023


Tens of thousands of tech workers have been laid off by companies recently, including at Amazon, Dropbox, GitHub, Google, Microsoft, and Vimeo. Startups, too, have made cuts, according to TechCrunch.

To help IEEE members cope with losing a job, The Institute asked Chenyang Xu for advice. The IEEE Fellow is president and cochairman of Perception Vision Medical Technologies, known as PVmed. The global startup, which is involved with AI-powered precision radiotherapy and surgery for treating cancer, is headquartered in Guangzhou, China. Xu was formerly general manager of Siemens Technology to Business North America.

In past articles, Xu has provided guidance for startups, such as steps they can take to ensure success, where founders can find financing, and how to be a global entrepreneur.

Included with his advice are ways IEEE can help.

Beef up your tech and leadership skills with online courses

Although Xu isn’t a financial advisor, he says the first thing to do when you lose your job is to “slim down financially.” Do what it takes to make sure you have enough money to support yourself and your family until you land your next job, he says.

“Don’t assume you’ll find a job right away,” he cautions. “You might not find one for six months, and by then you could become bankrupt.”

To help unemployed members keep costs down, IEEE offers a reduced-dues program. For those who have lost their insurance coverage, the organization offers group insurance plans.

After attending to your finances, Xu says, the next step is to reflect on your career.

“Being laid off gives you some breathing room,” he says. “When you were working, you had no choice in what kind of work you had to do. But now that you’re laid off, you need to think about your career in 5 to 10 years. You now have experience and know what you like to do and what you don’t.”

Ask yourself what makes you fulfilled, he says, as well as what makes you happy and what makes you feel valued. Then, he says, start looking for jobs that check all or some of the boxes.

Once you’ve figured out what your long-range career plan is, you most likely will need to learn new skills, Xu says. If you’ve decided to change fields, you’ll need to learn even more.

IEEE offers online courses that cover 16 subjects. There are classes, for example, on aerospace, computing, power and energy, and transportation. The emerging technologies course offerings cover augmented reality, blockchain technology, virtual reality, and more.

Several leadership courses can teach you how to manage people. They include An Introduction to Leadership, Communication and Presentation Skills, and Technical Writing for Scientists and Engineers.

Help with finding jobs and consulting gigs

Looking for a new position? The IEEE Job Site lists hundreds of openings. Job seekers can upload their résumé and set up an alert to be notified of jobs matching their criteria. The site’s career-planning portal offers services such as interview tips and help with writing résumés and cover letters.

IEEE-USA offers several on-demand job-search webinars. They cover topics such as how to find the right job, résumé trends, and healthy financial habits. You don’t have to live in the United States to access them.

To earn some extra money, consider becoming a consultant, Xu says.

“Consulting can be an excellent bridge to bring in income while working to secure the next job when facing the situation that your job search may take months or longer,” he says. “For some, consulting can be the next job.”

IEEE-USA’s consultants web page offers a number of services. For example, members can find an assignment by registering their name in the IEEE-USA Consultant Finder. Those who want to network with other consultants can use the site to search for them by state or by IEEE’s U.S. geographic regions. The website also offers resources to help consultants succeed, such as e-books, newsletters, and webinars.

To determine how much to charge a client, the IEEE-USA Salary Service provides information from IEEE’s U.S. members about their compensation and other details.

IEEE Collabratec’s Consultants Exchange offers networking workshops, educational webinars, and more.

If you are financially able and have the right ideas and expertise, Xu says, another option might be to launch your own company.

The IEEE Entrepreneurship program offers a variety of resources for founders. Its IEEE Entrepreneurship Exchange is a community of tech startups, investors, and venture capital organizations that discuss and develop entrepreneurial ideas and endeavors. There’s also a mentorship program, in which founders can get advice from an experienced entrepreneur.

The benefits of networking and social media

Don’t overlook the power of networking in finding a job, Xu advises.

“You need to reach out to as many people as possible,” he says.

You’re likely to meet people who could help you at your IEEE chapter or section meetings and at IEEE conferences, Xu says.

“You will be surprised about how many contacts you can meet who might help you find a job, mentor you, or give you information about a company that might be hiring,” he says.

Take advantage of LinkedIn and other professional social media outlets, Xu suggests. He adds that you should let your followers know you are looking for a position.

If you are knowledgeable about a specific topic, he encourages posting your thoughts about it to display your expertise to prospective employers.

Consider joining the IEEE Collabratec networking platform. Members have access to IEEE’s membership directory, where they can find contacts who might help them find a job. They also can join communities of members who are working in their technical areas, such as artificial intelligence, consumer technology, and the Internet of Things.

Relocation can be an adventure

If you are still having a hard time finding a job, consider moving to a different region of your country—or to another country—where jobs are more plentiful, Xu says.

“Relocating,” he says, “may open up whole new opportunities or adventures that are fulfilling to you or your family.”

28 May 2023


I love plants. I am not great with plants. I have accepted this fact and have therefore entrusted the lives of all of the plants in my care to robots. These aren’t fancy robots: They’re automated hydroponic systems that take care of water and nutrients and (fake) sunlight, and they do an amazing job. My plants are almost certainly happier this way, and therefore I don’t have to feel guilty about my hands-off approach. This is especially true now that there is data from roboticists at the University of California, Berkeley, to back up the assertion that robotic gardeners can do just as good a job as even the best human gardeners can. In fact, in some metrics, the robots can do even better.


In 1950, Alan Turing considered the question “Can Machines Think?” and proposed a test based on comparing human versus machine ability to answer questions. In this paper, we consider the question “Can Machines Garden?” based on comparing human versus machine ability to tend a real polyculture garden.

UC Berkeley has a long history of robotic gardens, stretching back to at least the early ’90s. And (as I have experienced) you can totally tend a garden with a robot. But the real question is this: Can you usefully tend a garden with a robot in a way that is as effective as a human tending that same garden? Time for some SCIENCE!

AlphaGarden is a combination of a commercial gantry robot farming system and UC Berkeley’s AlphaGardenSim, which tells the robot what to do to maximize plant health and growth. The system includes a high-resolution camera and soil moisture sensors for monitoring plant growth, and everything is (mostly) completely automated, from seed planting to drip irrigation to pruning. The garden itself is somewhat complicated, since it’s a polyculture garden (meaning it mixes many different kinds of plants). Polyculture farming mimics how plants grow in nature; its benefits include pest resilience, decreased fertilization needs, and improved soil health. But since different plants have different needs and grow in different ways at different rates, polyculture farming is more labor-intensive than monoculture, which is how most large-scale farming happens.

To test AlphaGarden’s performance, the UC Berkeley researchers planted two side-by-side farming plots with the same seeds at the same time. There were 32 plants in total, including kale, borage, Swiss chard, mustard greens, turnips, arugula, green lettuce, cilantro, and red lettuce. Over the course of two months, AlphaGarden tended its plot full time, while professional horticulturalists tended the plot next door. Then, the experiment was repeated, except that AlphaGarden was allowed to stagger the seed planting to give slower-growing plants a head start. A human did have to help the robot out with pruning from time to time, but just to follow the robot’s directions when the pruning tool couldn’t quite do what the robot wanted it to do.

The robot and the professional human both achieved similar results in their garden plots. UC Berkeley

The results of these tests showed that the robot was able to keep up with the professional human in terms of both overall plant diversity and coverage. In other words, stuff grew just as well when tended by the robot as it did when tended by a professional human. The biggest difference is that the robot managed to keep up while using 44 percent less water: several hundred liters less over two months.

“AlphaGarden has thus passed the Turing test for gardening,” the researchers say. They also say that “much remains to be done,” mostly by improving the AlphaGardenSim plant-growth simulator to further optimize water use, although there are other variables to explore like artificial light sources. The future here is a little uncertain, though—the hardware is pretty expensive, and human labor is (relatively) cheap. Expert human knowledge is not cheap, of course. But for those of us who are very much nonexperts, I could easily imagine mounting some cameras above my garden and installing some sensors and then just following the orders of the simulator about where and when and how much to water and prune. I’m always happy to donate my labor to a robot that knows what it’s doing better than I do.

“Can Machines Garden? Systematically Comparing the AlphaGarden vs. Professional Horticulturalists,” by Simeon Adebola, Rishi Parikh, Mark Presten, Satvik Sharma, Shrey Aeron, Ananth Rao, Sandeep Mukherjee, Tomson Qu, Christina Wistrom, Eugen Solowjow, and Ken Goldberg from UC Berkeley, will be presented at ICRA 2023 in London.

27 May 2023


For more than a century, utility companies have used electromechanical relays to protect power systems against damage that might occur during severe weather, accidents, and other abnormal conditions. But the relays could neither locate the faults nor accurately record what happened.

Then, in 1977, Edmund O. Schweitzer III invented the digital microprocessor-based relay as part of his doctoral thesis. Schweitzer’s relay, which could locate a fault to within 1 kilometer, set new standards for utility reliability, safety, and efficiency.

Edmund O. Schweitzer III


Employer: Schweitzer Engineering Laboratories

Title: President and CTO

Member grade: Life Fellow

Alma maters: Purdue University, West Lafayette, Ind.; Washington State University, Pullman

To develop and manufacture his relay, he launched Schweitzer Engineering Laboratories in 1982 from his basement in Pullman, Wash. Today SEL manufactures hundreds of products that protect, monitor, control, and automate electric power systems in more than 165 countries.

Schweitzer, an IEEE Life Fellow, is his company’s president and chief technology officer. He started SEL with seven workers; it now has more than 6,000.

The 40-year-old employee-owned company continues to grow. It has four manufacturing facilities in the United States. Its newest one, which opened in March in Moscow, Idaho, fabricates printed circuit boards.

Schweitzer has received many accolades for his work, including the 2012 IEEE Medal in Power Engineering. In 2019 he was inducted into the U.S. National Inventors Hall of Fame.

Advances in power electronics

Power system faults can happen when a tree or vehicle hits a power line, a grid operator makes a mistake, or equipment fails. The fault shunts extra current to some parts of the circuit, shorting it out.

Without a proper scheme or device in place to protect the equipment and ensure continuity of the power supply, an outage or blackout could propagate throughout the grid.

Overcurrent is not the only harmful effect, though. Faults also can change voltages, frequencies, and the direction of current.

A protection scheme should quickly isolate the fault from the rest of the grid, thus limiting damage on the spot and preventing the fault from spreading to the rest of the system. To do that, protection devices must be installed.

That’s where Schweitzer’s digital microprocessor-based relay comes in. He perfected it in 1982. It later was commercialized and sold as the SEL-21 digital distance relay/fault locator.
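Distance relays of this kind estimate where a fault is from quantities the relay can measure at its own terminal. As a concrete illustration, here is a minimal sketch of the classic single-ended reactance method; the function name and numbers are mine, and this shows the general principle rather than the SEL-21’s specific algorithm.

```cpp
#include <complex>
#include <cstdio>

// Single-ended reactance method (illustrative): the apparent impedance seen
// by the relay, Z = V/I, has an imaginary part that grows roughly in
// proportion to the distance to the fault along the line.
double impedanceFaultDistanceKm(std::complex<double> V,  // voltage phasor [V]
                                std::complex<double> I,  // current phasor [A]
                                double xPerKm) {         // line reactance [ohm/km]
    std::complex<double> Z = V / I;   // apparent impedance at the relay
    return Z.imag() / xPerKm;         // reactance scaled to kilometers
}

int main() {
    // Example: the relay sees Z = 30 + 40j ohms on a 0.5-ohm/km line,
    // suggesting a fault about 80 km away.
    std::complex<double> V(10000.0, 0.0);
    std::complex<double> I(120.0, -160.0);
    std::printf("estimated distance: %.1f km\n",
                impedanceFaultDistanceKm(V, I, 0.5));
}
```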

Inspired by a blackout and a protective relays book

Schweitzer says his relay was, in part, inspired by an event that took place during his first year of college.

“Back in 1965, when I was a freshman at Purdue University, a major blackout left millions without power for hours in the U.S. Northeast and Ontario, Canada,” he recalls. “It was quite an event, and I remember it well. I learned many lessons from it. One was how difficult it was to restore power.”

He says he also was inspired by the book Protective Relays: Their Theory and Practice. He read it while an engineering graduate student at Washington State University, in Pullman.

“I bought the book on the Thursday before classes began and read it over the weekend,” he says. “I couldn’t put it down. I was hooked.

“I realized that these solid-state devices were special-purpose signal processors. They read the voltage and current from the power systems and decided whether the power systems’ apparatuses were operating correctly. I started thinking about how I could take what I knew about digital signal processing and put it to work inside a microprocessor to protect an electric power system.”

Four-bit and 8-bit microprocessors were new at the time.

“I think this is how most inventions start: taking one technology and putting it together with another to make new things,” he says. “The inventors of the microprocessor had no idea about all the kinds of things people would use it for. It is amazing.”

He says he was introduced to signal processing, signal analysis, and how to use digital techniques in 1968 while at his first job, working for the U.S. Department of Defense at Fort Meade, in Maryland.

Faster ways to clear faults and improve cybersecurity

Schweitzer continues to invent ways of protecting and controlling electric power systems. In 2016 his company released the SEL-T400L, which samples a power system every microsecond to time the arrival of traveling waves, disturbances that race down a line at nearly the speed of light, in order to quickly detect and locate transmission-line faults.

The relay decides whether to trip a circuit or take other actions in 1 to 2 milliseconds. Previously, it would take a protective relay on the order of 16 ms. A typical circuit breaker takes 30 to 40 ms in high-voltage AC circuits to trip.
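The principle behind traveling-wave fault location is simple enough to sketch: a fault launches waves toward both ends of the line, time-synchronized clocks at each terminal timestamp the arrivals, and the difference pins down the fault position. The sketch below is illustrative and assumes idealized, already-detected wavefronts; it is not SEL’s implementation.

```cpp
#include <cstdio>

// Double-ended traveling-wave fault location (illustrative). A fault at
// distance dA from terminal A satisfies:
//   tA = t0 + dA / v   and   tB = t0 + (L - dA) / v
// Subtracting eliminates the unknown fault time t0 and gives dA.
double twFaultDistanceKm(double lineKm,     // line length L [km]
                         double tA_us,      // wave arrival at A [microseconds]
                         double tB_us,      // wave arrival at B [microseconds]
                         double v_kmPerUs)  // wave speed, ~0.98c, ~0.294 km/us
{
    return 0.5 * (lineKm + v_kmPerUs * (tA_us - tB_us));
}

int main() {
    // On a 100-km line, a wave that reaches A about 136 us before it
    // reaches B implies a fault roughly 30 km from terminal A.
    std::printf("fault at %.1f km from A\n",
                twFaultDistanceKm(100.0, 0.0, 136.05, 0.294));
}
```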

“I like to talk about the need for speed,” Schweitzer says. “In this day and age, there’s no reason to wait to clear a fault. Faster tripping is a tremendous opportunity from a point of view of voltage and angle stability, safety, reducing fire risk, and damage to electrical equipment.

“We are also going to be able to get a lot more out of the existing infrastructure by tripping faster. For every millisecond in clearing time saved, the transmission system stability limits go up by 15 megawatts. That’s about one feeder per millisecond. So, if we save 12 ms, all of the sudden we are able to serve 12 more distribution feeders from one part of one transmission system.”

The time-domain technology also will find applications in transformer and distribution protection schemes, he says, as well as have a significant impact on DC transmission.

What excites Schweitzer today, he says, is the concept of energy packets, which he and SEL have been working on. The packets measure energy exchange for all kinds of signals, including those on distorted AC systems and DC networks.

“Energy packets precisely measure energy transfer, independent of frequency or phase angle, and update at a fixed rate with a common time reference such as every millisecond,” he says. “Time-domain energy packets provide an opportunity to speed up control systems and accurately measure energy on distorted systems—which challenges traditional frequency-domain calculation methods.”
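In time-domain terms, an energy packet is just the integral of instantaneous power over one fixed reporting interval. Here is a minimal sketch of that idea, assuming uniformly sampled voltage and current; the function name and rates are mine, not SEL’s.

```cpp
#include <cstdio>
#include <vector>

// One "energy packet" (illustrative): integrate p(t) = v(t) * i(t) over a
// fixed interval. Because it integrates the raw waveforms, the result does
// not depend on frequency or phase angle, and it works for distorted AC or DC.
double energyPacketJoules(const std::vector<double>& v,  // sampled voltage [V]
                          const std::vector<double>& i,  // sampled current [A]
                          double dtSeconds) {            // sample period [s]
    double energy = 0.0;
    for (std::size_t k = 0; k < v.size() && k < i.size(); ++k)
        energy += v[k] * i[k] * dtSeconds;  // rectangle-rule integration
    return energy;
}

int main() {
    // A 1-ms packet sampled at 1 MHz (1,000 samples) on a DC circuit:
    std::vector<double> v(1000, 100.0);  // 100 V
    std::vector<double> i(1000, 5.0);    // 5 A
    // 100 V * 5 A * 1 ms = 0.5 J in this packet.
    std::printf("packet energy = %.3f J\n", energyPacketJoules(v, i, 1e-6));
}
```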

He also is focusing on improving the reliability of critical infrastructure networks by improving cybersecurity, situational awareness, and performance. Plug-and-play and best-effort networking aren’t safe enough for critical infrastructure, he says.

“SEL OT SDN technology solves some significant cybersecurity problems,” he says, “and frankly, it makes me feel comfortable for the first time with using Ethernet in a substation.”

From engineering professor to inventor

Schweitzer didn’t start off planning to launch his own company. He began a successful career in academia in 1977 after joining the electrical engineering faculty at Ohio University, in Athens. Two years later, he moved to Pullman, Wash., where he taught at Washington State’s Voiland College of Engineering and Architecture for the next six years. It was only after sales of the SEL-21 took off that he decided to devote himself to his startup full time.

It’s little surprise that Schweitzer became an inventor and started his own company, as his father and grandfather were inventors and entrepreneurs.

His grandfather, Edmund O. Schweitzer, who held 87 patents, invented the first reliable high-voltage fuse in collaboration with Nicholas J. Conrad in 1911, the year the two founded Schweitzer and Conrad—today known as S&C Electric Co.—in Chicago.

Schweitzer’s father, Edmund O. Schweitzer Jr., had 208 patents. He invented several line-powered fault-indicating devices, and he founded the E.O. Schweitzer Manufacturing Co. in 1949. It is now part of SEL.

Schweitzer says a friend gave him the best financial advice he ever got about starting a business: Save your money.

“I am so proud that our 6,000-plus-person company is 100 percent employee-owned,” Schweitzer says. “We want to invest in the future, so we reinvest our savings into growth.”

He advises those who are planning to start a business to focus on their customers and create value for them.

“Unleash your creativity,” he says, “and get engaged with customers. Also, figure out how to contribute to society and make the world a better place.”

27 May 2023


As I read the newest papers about DNA-based computing, I had to confront a rather unpleasant truth. Despite being a geneticist who also majored in computer science, I was struggling to bridge two concepts—the universal Turing machine, the very essence of computing, and the von Neumann architecture, the basis of most modern CPUs. I had written C++ code to emulate the machine described in Turing’s 1936 paper, and could use it to decide, say, if a word was a palindrome. But I couldn’t see how such a machine—with its one-dimensional tape memory and ability to look at only one symbol on that tape at a time—could behave like a billion-transistor processor with hardware features such as an arithmetic logic unit (ALU), program counter, and instruction register.

I scoured old textbooks and watched online lectures about theoretical computer science, but my knowledge didn’t advance. I decided I would build a physical Turing machine that could execute code written for a real processor.

Rather than a billion-transistor behemoth, I thought I’d target the humble 8-bit 6502 microprocessor. This legendary chip powered the computers I used in my youth. And as a final proof, my simulated processor would have to run Pac-Man, specifically the version of the game written for the Apple II computer.

In Turing’s paper, his eponymous machine is an abstract concept with infinite memory. Infinite memory isn’t possible in reality, but physical Turing machines can be built with enough memory for the task at hand. The hardware implementation of a Turing machine can be organized around a rule book and a notepad. Indeed, when we do basic arithmetic, we use a rule book in our head (such as knowing when to carry a 1). We manipulate numbers and other symbols using these rules, stepping through the process for, say, long division. There are key differences, though. We can move all over a two-dimensional notepad, doing a scratch calculation in the margin before returning to the main problem. With a Turing machine we can only move left or right on a one-dimensional notepad, reading or writing one symbol at a time.
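The rule-book idea can be captured in a few lines of code. This is a sketch of the abstract machine, not the PureTuring hardware: each rule maps the pair (current state, symbol under the head) to a symbol to write, a direction to move, and the next state.

```cpp
#include <map>
#include <string>
#include <utility>

struct Action {
    char write;      // symbol to put on the notepad (tape)
    int  move;       // -1 = step left, +1 = step right
    int  nextState;  // which rules apply next (-1 = halt)
};

using RuleBook = std::map<std::pair<int, char>, Action>;

void run(const RuleBook& rules, std::string& tape, int head, int state) {
    while (state != -1) {
        const Action& a = rules.at({state, tape[head]});
        tape[head] = a.write;   // write one symbol...
        head += a.move;         // ...move one cell...
        state = a.nextState;    // ...and pick the next rule set.
    }
}

int main() {
    // Toy rule book: state 0 flips bits rightward until it hits '$'.
    RuleBook rules = {
        {{0, '0'}, {'1', +1, 0}},
        {{0, '1'}, {'0', +1, 0}},
        {{0, '$'}, {'$', +1, -1}},  // halt at the delimiter
    };
    std::string tape = "10110$";
    run(rules, tape, 0, 0);  // tape is now "01001$"
}
```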

A key revelation for me was that the internal registers of the 6502 could be duplicated sequentially on the one-dimensional notepad using four symbols—0, 1, _ (or space), and $. The symbols 0 and 1 are used to store the actual binary data that would sit in a 6502’s register. The $ symbol is used to delineate different registers, and the _ symbol acts as a marker, making it easy to return to a spot in memory we’re working with. The main memory of the Apple II is emulated in a similar fashion.
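As a flavor of the encoding (the real PureTuring layout interleaves some registers, as the article notes below), one 8-bit register could sit on the tape as eight 0/1 symbols between $ delimiters. Both the function name and the exact layout here are hypothetical:

```cpp
#include <bitset>
#include <cstdint>
#include <string>

// Hypothetical tape encoding of a single 8-bit 6502 register: '$' fences it
// off from its neighbors, and a '_' marker can later be dropped next to a
// bit the machine is working on.
std::string encodeRegister(std::uint8_t value) {
    return "$" + std::bitset<8>(value).to_string() + "$";
}
// encodeRegister(0xA9) == "$10101001$"  (0xA9 is the 6502 LDA-immediate opcode)
```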

Apart from some flip-flops, a couple of NOT gates, and an up-down counter, the PureTuring machine uses only RAM and ROM chips—there are no logic chips. An Arduino board [bottom] monitors the RAM to extract display data. James Provost

Programming a CPU is all about manipulating the registers and transferring their contents to and from main memory using an instruction set. I could emulate the 6502’s instructions as chains of rules that acted on the registers, symbol by symbol. The rules are stored in a programmable ROM, with the output of one rule dictating the next rule to be used, what should be written on the notepad (implemented as a RAM chip), and whether we should read the next symbol or the previous one.

I dubbed my machine PureTuring. The ROM’s data outputs are connected to a set of flip-flops. Some of the flip-flops are connected to the RAM, to allow the next or previous symbol to be fetched. Others are connected to the ROM’s own address lines in a feedback loop that selects the next rule.

It turned out to be more efficient to interleave the bits of some registers rather than leaving them as separate 8-bit chunks. Creating the rule book to implement the 6502’s instruction set required 9,000 rules. Of these, 2,500 were created using an old-school method of writing them on index cards, and the rest were generated by a script. Putting this together took about six months.

Only some of the 6502 registers are exposed to programmers [green]; its internal, hidden registers [purple] are used to execute instructions. Below each register is shown how its bits are arranged, and sometimes interleaved, on the PureTuring’s “tape.” James Provost

To fetch a software instruction, PureTuring steps through the notepad using $ symbols as landmarks until it gets to the memory location pointed to by the program counter. The 6502 opcodes are one byte long, so by the time the eighth bit is read, PureTuring is in one of 256 states. Then PureTuring returns to the instruction register and writes the opcode there, before moving on to perform the instruction. A single instruction can take up to 3 million PureTuring clock cycles to fetch, versus one to six cycles for the actual 6502!
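That 256-way branch is easy to picture in code: each bit read doubles the number of possible states, so eight reads fan out into 256 outcomes. A sketch, assuming the tape position already sits at the first bit of the opcode:

```cpp
#include <cstdint>
#include <string>

// Illustrative version of the fetch described above: reading the opcode one
// symbol at a time, the machine's state after each bit encodes all the bits
// seen so far. After eight bits there are 256 possible states, one for each
// possible opcode byte.
std::uint8_t fetchOpcode(const std::string& tape, std::size_t pos) {
    int state = 0;
    for (int bit = 0; bit < 8; ++bit)
        state = state * 2 + (tape.at(pos + bit) - '0');  // branch on '0'/'1'
    return static_cast<std::uint8_t>(state);
}
```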

The 6502 uses a memory-mapped input/output system. This means that devices such as displays are represented as locations somewhere within main memory. By using an Arduino to monitor the part of the notepad that corresponds to the Apple II’s graphics memory, I could extract pixels and show them on an attached terminal or screen. This required writing a “dewozzing” function for the Arduino as the Apple II’s pixel data is laid out in a complex scheme. (Steve Wozniak created this scheme to enable the Apple II to fake an analog color TV signal with digital chips and keep the dynamic RAM refreshed.)
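The hi-res layout Wozniak chose is well documented: rows are interleaved in groups of 8 and 64, so consecutive screen rows sit far apart in memory, and any dewozzing routine has to invert this address arithmetic. The sketch below uses the standard hi-res page 1 mapping; the function name is my own.

```cpp
#include <cstdint>

// Address of the first byte of screen row y (0..191) in Apple II hi-res
// page 1. Each row is 40 bytes wide, and each byte carries 7 pixels plus a
// palette-select bit.
std::uint16_t hiresRowAddress(int y) {
    return 0x2000                  // base of hi-res page 1
         + (y % 8) * 0x400         // which 1-KB block within the page
         + ((y / 8) % 8) * 0x80    // which 128-byte chunk within the block
         + (y / 64) * 0x28;        // which 40-byte third of the screen
}
```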

I could have inserted input from a keyboard into the notepad in a similar fashion, but I didn’t bother because actually playing Pac-Man on the PureTuring would require extraordinary patience: It took about 60 hours just to draw one frame’s worth of movement for the Pac-Man character and the pursuing enemy ghosts. A modification that moved the machine along the continuum toward a von Neumann architecture added circuitry to permit random access to a notepad symbol, making it unnecessary to step through all prior symbols. This adjustment cut the time to draw the game characters to a mere 20 seconds per frame!

Looking forward, features can be added one by one, moving piecemeal from a Turing machine to a von Neumann architecture: Widen the bus to read eight symbols at a time instead of one, replace the registers in the notepad with hardware registers, add an ALU, and so on.

Now when I read papers and articles on DNA-based computing, I can trace each element back to something in a Turing machine or forward to a conventional architecture, running my own little mental machine along a conceptual tape!

26 May 2023


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

We’ve just relaunched the IEEE Robots Guide over at RobotsGuide.com, featuring new robots, new interactives, and a complete redesign from the ground up. Tell your friends, tell your family, and explore nearly 250 robots in pictures and videos and detailed facts and specs, with lots more on the way!

[Robots Guide]

The qualities that make a knitted sweater comfortable and easy to wear are the same things that might allow robots to better interact with humans. RobotSweater, developed by a research team from Carnegie Mellon University’s Robotics Institute, is a machine-knitted textile “skin” that can sense contact and pressure.

RobotSweater’s knitted fabric consists of two layers of conductive yarn made with metallic fibers to conduct electricity. Sandwiched between the two is a net-like, lace-patterned layer. When pressure is applied to the fabric—say, from someone touching it—the conductive yarn closes a circuit and is read by the sensors. In their research, the team demonstrated that pushing on a companion robot outfitted in RobotSweater told it which way to move or what direction to turn its head. When used on a robot arm, RobotSweater allowed a push from a person’s hand to guide the arm’s movement, while grabbing the arm told it to open or close its gripper. In future research, the team wants to explore how to program reactions from the swipe or pinching motions used on a touchscreen.

[CMU]

DEEP Robotics Co. yesterday announced that it has launched the latest version of its Lite3 robotic dog in Europe. The system combines advanced mobility and an open modular structure to serve the education, research, and entertainment markets, said the Hangzhou, China–based company.

Lite3’s announced price is US $2,900. It ships in September.

[Deep Robotics]

Estimating terrain traversability in off-road environments requires reasoning about complex interaction dynamics between the robot and these terrains. We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback in a self-supervised manner. We validate our method in multiple short- and large-scale navigation tasks on a large, autonomous all-terrain vehicle (ATV) on challenging off-road terrains, and demonstrate ease of integration on a separate large ground robot.

This work will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2023) in London next week.

[Mateo Guaman Castro]

Thanks, Mateo!

Sheet Metal Workers’ Local Union 104 has introduced a training course on automating and innovating field layout with the Dusty Robotics FieldPrinter system.

[Dusty Robotics]

Apptronik has half of its general-purpose robot ready to go!

The other half is still a work in progress, but here’s progress:

[Apptronik]

A spotted-lanternfly-murdering robot is my kind of murdering robot.

[FRC]

ANYmal is rated IP67 for water resistance, but this still terrifies me.

[ANYbotics]

Check out the impressive ankle action on this humanoid walking over squishy terrain.

[CNRS-AIST JRL]

Wing’s progress can be charted along the increasingly dense environments in which we’ve been able to operate: from rural farms to lightly populated suburbs to more dense suburbs to large metropolitan areas like Brisbane, Australia; Helsinki, Finland; and the Dallas Fort Worth metro area in Texas. Earlier this month, we did a demonstration delivery at Coors Field–home of the Colorado Rockies–delivering beer (Coors of course) and peanuts to the field. Admittedly, it wasn’t on a game day, but there were 1,000 people in the stands enjoying the kickoff party for AUVSI’s annual autonomous systems conference.

[Wing]

Pollen Robotics’ team will be going to ICRA 2023 in London! Come and meet us there to try teleoperating Reachy by yourself and give us your feedback!

[Pollen Robotics]

The most efficient drone engine is no engine at all.

[MAVLab]

Is your robot spineless? Should it be? Let’s find out.

[UPenn]

Looks like we’re getting closer to that robot butler.

[Prisma Lab]

This episode of the Robot Brains podcast features Raff D’Andrea, from Kiva, Verity, and ETH Zurich.

[Robot Brains]

25 May 2023


Calling all robot fanatics! We are the creators of the Robots Guide, IEEE’s interactive site about robotics, and we need your help.

Today, we’re expanding our massive catalog to nearly 250 robots, and we want your opinion to decide which are the coolest, most wanted, and also creepiest robots out there.

To submit your votes, find robots on the site that are interesting to you and rate them based on their design and capabilities. Every Friday, we’ll crunch the votes to update our Robot Rankings.

Rate this robot: For each robot on the site, you can submit your overall rating, answer if you’d want to have this robot, and rate its appearance. IEEE Spectrum

May the coolest (or creepiest) robot win!

Our collection currently features 242 robots, including humanoids, drones, social robots, underwater vehicles, exoskeletons, self-driving cars, and more.

The Robots Guide features three rankings: Top Rated, Most Wanted, and Creepiest. IEEE Spectrum

You can explore the collection by filtering robots by category, capability, and country, or sorting them by name, year, or size. And you can also search robots by keywords.

In particular, check out some of the new additions, which could use more votes. These include some really cool robots like LOVOT, Ingenuity, GITAI G1, Tertill, Salto, Proteus, and SlothBot.

Each robot profile includes detailed tech specs, photos, videos, history, and some also have interactives that let you move and spin robots 360 degrees on the screen.

And note that these are all real-world robots. If you’re looking for sci-fi robots, check out our new Face-Off: Sci-Fi Robots game.

Robots Redesign

Today, we’re also relaunching the Robots Guide site with a fast and sleek new design, more sections and games, and thousands of photos and videos.

The new site was designed by Pentagram, the prestigious design consultancy, in collaboration with Standard, a design and technology studio.


The site is built as a modern, fully responsive Web app. It’s powered by Remix.run, a React-based Web framework, with structured content by Sanity.io and site search by Algolia.

More highlights:

  • Explore nearly 250 robots
  • Make robots move and spin 360 degrees
  • View over 1,000 amazing photos
  • Watch 900 videos of robots in action
  • Play the Sci-Fi Robots Face-Off game
  • Keep up to date with daily robot news
  • Read detailed tech specs about each robot
  • Robot Rankings: Top Rated, Most Wanted, Creepiest

The Robots Guide was designed for anyone interested in learning more about robotics, including robot enthusiasts, both experts and beginners, researchers, entrepreneurs, STEM educators, teachers, and students.

The foundation for the Robots Guide is IEEE’s Robots App, which was downloaded 1.3 million times and is used in classrooms and STEM programs all over the world.

The Robots Guide is an editorial product of IEEE Spectrum, the world’s leading technology and engineering magazine and the flagship publication of the IEEE. Thank you to the IEEE Foundation and our sponsors for their support, which enables all of the Robots Guide content to be open and free to everyone.

25 May 2023


The most advanced manufacturers of computer processors are in the middle of the first big change in device architecture in a decade—the shift from finFETs to nanosheets. Another 10 years should bring about another fundamental change, where nanosheet devices are stacked atop each other to form complementary FETs (CFETs), capable of cutting the size of some circuits in half. But the latter move is likely to be a heavy lift, say experts. An in-between transistor called the forksheet might keep circuits shrinking without quite as much work.

The idea for the forksheet came from exploring the limits of the nanosheet architecture, says Julien Ryckaert, the vice president for logic technologies at Imec. The nanosheet’s main feature is its horizontal stacks of silicon ribbons surrounded by its current-controlling gate. Although nanosheets only recently entered production, experts were already looking for their limits years ago. Imec was tasked with figuring out “at what point nanosheet will start tanking,” he says.

Ryckaert’s team found that one of the main limitations to shrinking nanosheet-based logic is keeping the separation between the two types of transistor that make up CMOS logic. The two types—NMOS and PMOS—must maintain a certain distance to limit capacitance that saps the devices’ performance and power consumption. “The forksheet is a way to break that limitation,” Ryckaert says.

Instead of individual nanosheet devices, the forksheet scheme builds them as pairs on either side of a dielectric wall. (No, it doesn’t really resemble a fork much.) The wall allows the devices to be placed closer together without causing a capacitance problem, says Naoto Horiguchi, the director of CMOS technology at Imec. Designers could use the extra space to shrink logic cells, or they could use the extra room to build transistors with wider sheets leading to better performance, he says.

Leading-edge transistors are already transitioning from the fin field-effect transistor (FinFET) architecture to nanosheets. The ultimate goal is to stack two devices atop each other in a CFET configuration. The forksheet may be an intermediary step on the way. Imec

“CFET is probably the ultimate CMOS architecture,” says Horiguchi of the device that Imec expects to reach production readiness around 2032. But he adds that CFET “integration is very complex.” Forksheet reuses most of the nanosheet production steps, potentially making it an easier job, he says. Imec predicts it could be ready around 2028.

There are still many hurdles to leap over, however. “It’s more complex than initially thought,” Horiguchi says. From a manufacturing perspective, the dielectric wall is a bit of a headache. There are several types of dielectric used in advanced CMOS and several steps that involve etching it away. Making forksheets means etching those others without accidentally attacking the wall. And it’s still an open question which types of transistor should go on either side of the wall, Horiguchi says. The initial idea was to put PMOS on one side and NMOS on the other, but there may be advantages to putting the same type on both sides instead.

24 May 2023


There was a time, decades really, when all it took to make a better computer chip were smaller transistors and narrower interconnects. That time’s long gone now, and although transistors will continue to get a bit smaller, simply making them so is no longer the point. The only way to keep up the exponential pace of computing now is a scheme called system technology co-optimization, or STCO, argued researchers at ITF World 2023 last week in Antwerp, Belgium. It’s the ability to break chips up into their functional components, use the optimal transistor and interconnect technology for each function, and stitch them back together to create a lower-power, better-functioning whole.

“This leads us to a new paradigm for CMOS,” says Imec R&D manager Marie Garcia Bardon. CMOS 2.0, as the Belgium-based nanotech research organization is calling it, is a complicated vision. But it may be the most practical way forward, and parts of it are already evident in today’s most advanced chips.

How we got here

In a sense, the semiconductor industry was spoiled by the decades prior to about 2005, says Julien Ryckaert, R&D vice president at Imec. During that time, chemists and device physicists were able to regularly produce a smaller, lower-power, faster transistor that could be used for every function on a chip and that would lead to a steady increase in computing capability. But the wheels began to come off that scheme not long thereafter. Device specialists could come up with excellent new transistors, but those transistors weren’t making better, smaller circuits, such as the SRAM memory and standard logic cells that make up the bulk of CPUs. In response, chipmakers began to break down the barriers between standard cell design and transistor development. Called design technology co-optimization, or DTCO, the new scheme led to devices designed specifically to make better standard cells and memory.

But DTCO isn’t enough to keep computing going. The limits of physics and economic realities conspired to put barriers in the path to progressing with a one-size-fits-all transistor. For example, physical limits have prevented CMOS operating voltages from decreasing below about 0.7 volts, slowing down progress in power consumption, explains Anabela Veloso, principal engineer at Imec. Moving to multicore processors helped ameliorate that issue for a time. Meanwhile, input-output limits meant it became more and more necessary to integrate the functions of multiple chips onto the processor. So in addition to a system-on-chip (SoC) having multiple instances of processor cores, they also integrate network, memory, and often specialized signal-processing cores. Not only do these cores and functions have different power and other needs, they also can’t be made smaller at the same rate. Even the CPU’s cache memory, SRAM, isn’t scaling down as quickly as the processor’s logic.

System technology co-optimization

Getting things unstuck is as much a philosophical shift as a collection of technologies. According to Ryckaert, STCO means looking at a system-on-chip as a collection of functions, such as power supply, I/O, and cache memory. “When you start reasoning about functions, you realize that an SoC is not this homogeneous system, just transistors and interconnect,” he says. “It is functions, which are optimized for different purposes.”

Ideally, you could build each function using the process technology best suited to it. In practice, that mostly means building each on its own sliver of silicon, or chiplet. Then you would bind those together using technology, such as advanced 3D stacking, so that all the functions act as if they were on the same piece of silicon.

Examples of this thinking are already present in advanced processors and AI accelerators. Intel’s high-performance computing accelerator Ponte Vecchio (now called Intel Data Center GPU Max) is made up of 47 chiplets built using different processes from both Intel and Taiwan Semiconductor Manufacturing Co. AMD already uses different technologies for the I/O chiplet and compute chiplets in its CPUs, and it recently began separating out SRAM for the compute chiplet’s high-level cache memory.

Imec’s road map to CMOS 2.0 goes even further. The plan requires continuing to shrink transistors, moving power and possibly clock signals beneath a CPU’s silicon, and ever-tighter 3D-chip integration. “We can use those technologies to recognize the different functions, to disintegrate the SoC, and reintegrate it to be very efficient,” says Ryckaert.

Transistors will change form over the coming decade, but so will the metal that connects them. Ultimately, transistors could be stacked-up devices made of 2D semiconductors instead of silicon. Power delivery and other infrastructure could be layered beneath the transistors. Imec

Continued transistor scaling

Major chipmakers are already transitioning from the FinFET transistors that powered the last decade of computers and smartphones to a new architecture, nanosheet transistors [see “The Nanosheet Transistor Is the Next (and Maybe Last) Step in Moore’s Law”]. Ultimately, two nanosheet transistors will be built atop each other to form the complementary FET, or CFET, which Veloso says “represents the ultimate in CMOS scaling” [see “3D-Stacked CMOS Takes Moore’s Law to New Heights”].

As these devices scale down and change shape, one of the main goals is to drive down the size of standard logic cells. That is typically measured in “track height”—basically, the number of metal interconnect lines that can fit within the cell. Advanced FinFETs and early nanosheet devices are six-track cells. Moving to five tracks may require an interstitial design called a forksheet, which squeezes devices together more closely without necessarily making them smaller. CFETs will then reduce cells to four tracks or possibly fewer.

Leading-edge transistors are already transitioning from the fin field-effect transistor (FinFET) architecture to nanosheets. The ultimate goal is to stack two devices atop each other in a CFET configuration. The forksheet may be an intermediary step on the way. Imec

According to Imec, chipmakers will be able to produce the finer features needed for this progression using ASML’s next generation of extreme-ultraviolet lithography. That tech, called high-numerical-aperture EUV, is under construction at ASML now, and Imec is next in line for delivery. Increasing numerical aperture, an optics term related to the range of angles over which a system can gather light, leads to more precise images.

Backside power-delivery networks

The basic idea in backside power-delivery networks is to remove all the interconnects that send power—as opposed to data signals—from above the silicon surface and place them below it. This should allow for less power loss, because the power-delivering interconnects can be made larger and less resistive. It also frees up room above the transistor layer for signal-carrying interconnects, possibly leading to more compact designs [see “Next-Gen Chips Will Be Powered From Below”].

In the future, even more could be moved to the backside of the silicon. For example, so-called global interconnects—those that span (relatively) great distances to carry clock and other signals—could go beneath the silicon. Or engineers could add active power-delivery devices, such as electrostatic discharge safety diodes.

3D integration

There are several ways to do 3D integration, but the most advanced today are wafer-to-wafer and die-to-wafer hybrid bonding [see “3 Ways 3D Chip Tech Is Upending Computing”]. These two provide the highest density of interconnections between two silicon dies. But this method requires that the two dies be designed together, so their functions and interconnect points align, allowing them to act as a single chip, says Anne Jourdain, principal member of the technical staff at Imec. Imec R&D is on track to be able to produce millions of 3D connections per square millimeter in the near future.

Getting to CMOS 2.0

CMOS 2.0 would take disaggregation and heterogeneous integration to the extreme. Depending on which technologies make sense for the particular applications, it could result in a 3D system that incorporates layers of embedded memory, I/O and power infrastructure, high-density logic, high drive-current logic, and huge amounts of cache memory.

Getting to that point will take not just technology development but also the tools and training to discern which technologies would actually improve a system. As Bardon points out, smartphones, servers, machine-learning accelerators, and augmented- and virtual-reality systems all have very different requirements and constraints. What makes sense for one might be a dead end for the other.

23 May 2023


Stephen Cass: Welcome to Fixing the Future, an IEEE Spectrum podcast. This episode is brought to you by IEEE Xplore, the digital library with over 6 million technical documents and free search. I’m senior editor Stephen Cass, and today I’m talking with a former Spectrum editor, Sally Adee, about her new book, We Are Electric: The New Science of Our Body’s Electrome. Sally, welcome to the show.

Sally Adee: Hi, Stephen. Thank you so much for having me.

Cass: It’s great to see you again, but before we get into exactly what you mean by the body’s electrome and so on, I see that in researching this book, you actually got yourself zapped quite a bit in a number of different ways. So I guess my first question is: are you okay?

Adee: I mean, as okay as I can imagine being. Unfortunately, there’s no experimental sort of condition and control condition. I can’t see the self I would have been in the multiverse version of myself that didn’t zap themselves. So I think I’m saying yes.

Cass: The first question I have then is what is an electrome?

Adee: So the electrome is this word, I think, that’s been burbling around the bioelectricity community for a number of years. The first time it was committed to print was in a 2016 paper by this guy called Arnold De Loof, a researcher out in Europe. But before that, a number of the researchers I spoke to for this book told me that they had started to see it in papers that they were reviewing. And I think it wasn’t sort of defined consistently always because there’s this idea that seems to be sort of bubbling to the top, bubbling to the surface, that there are these electrical properties that the body has, and they’re not just epiphenomena, and they’re not just in the nervous system. They’re not just action potentials, but that there are electrical properties in every one of our cells, but also at the organ level, potentially at the sort of entire system level, that people are trying to figure out what they actually do.

And just as action potentials aren’t just epiphenomena, but actually our control mechanisms, they’re looking at how these electrical properties work in the rest of the body, like in the cells, membrane voltages and skin cells, for example, are involved in wound healing. And there’s this idea that maybe these are an epigenetic variable that we haven’t been able to conscript yet. And there’s such promise in it, but a lot of the research, the problem is that a lot of the research is being done across really far-flung scientific communities, some in developmental biology, some of it in oncology, a lot of it in neuroscience, obviously. But what this whole idea of the electrome is— I was trying to pull this all together because the idea behind the book is I really want people to just develop this umbrella of bioelectricity, call it the electrome, call it bioelectricity, but I kind of want the word electrome to do for bioelectricity research what the word genome did for molecular biology. So that’s basically the spiel.

Cass: So I want to surf back to a couple points you raised there, but first off, just for people who might not know, what is an action potential?

Adee: So the action potential is the electrical mechanism by which the nervous signal travels, either to actuate motion at the behest of your intent or to gain sensation and sort of perceive the world around you. And that’s the electrical part of the electrochemical nervous impulse. So everybody knows about neurotransmitters at the synapse and— well, not everybody, but probably Spectrum listeners. They know about the serotonin that’s released and all these other little guys. But the thing is you wouldn’t be able to have that release without the movement of charged particles called ions in and out of the nerve cell that actually send this impulse down and allow it to travel at a rate of speed that’s fast enough to let you yank your hand away from a hot stove when you’ve touched it, before you even sort of perceive that you did so.

Cass: So that actually brings me to my next question. So you may remember from some of Spectrum’s editorial meetings, when we were deciding if a tech story was for us or not, that we would literally often ask, “Where is the moving electron? Where is the moving electron?” But bioelectricity is not really based on moving electrons. It’s based on these ions.

Adee: Yeah. So let’s take the neuron as an example. So what you’ve got is— let me do like a— imagine a spherical cow for a neuron, okay? So you’ve got a blob and it’s a membrane, and that separates the inside of your cell from the outside of your cell. And this membrane is studded with, I think, tens of thousands of little pores called ion channels. And the pores are not just sieve pores. They’re not inert. They’re really smart. And they decide which ions they like. Now, let’s go to the ions. Ions are suffusing your extracellular fluid, all the stuff that bathes you. It’s basically the reason they say you’re 66 percent water or whatever. This is like seawater. It’s got sodium, potassium, calcium, etc., and these ions are charged particles.

So when you’ve got a cell, the neuron, it likes potassium, so it lets it in. It doesn’t really like sodium so much. It’s got very strong preferences. So in its resting state, which is its happy place, those channels allow potassium ions to enter. And those are probably where the electrons are, actually, because an ion, it’s got a plus-one charge or a minus-one charge based on— but let’s not go too far into it. But basically, the cell allows the potassium to come inside, and in its resting state, the separation of the potassium from the sodium causes, for all sorts of complicated reasons, a charge inside the cell that is minus 70 degree— sorry, minus 70 millivolts with respect to the extracellular fluid.
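[For reference, a detail not spelled out in the episode: the textbook quantity behind that number is the Nernst potential, the voltage at which a single ion species is in equilibrium across the membrane. Assuming typical mammalian concentrations of roughly 5 millimolar potassium outside the cell and 140 millimolar inside, at body temperature,

$$E_{\mathrm{K}} = \frac{RT}{zF}\,\ln\frac{[\mathrm{K}^+]_{\mathrm{out}}}{[\mathrm{K}^+]_{\mathrm{in}}} \approx 26.7\ \mathrm{mV} \times \ln\frac{5}{140} \approx -89\ \mathrm{mV}.$$

The resting neuron sits near minus 70 millivolts rather than minus 89 because the membrane is also slightly permeable to sodium and chloride, which pull the voltage up from potassium’s equilibrium value.]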

Cass: Before I read your book, I kind of had the idea that how neurons use electricity was, essentially, settled science, very well understood, all kind of squared away, and this was how the body used electricity. But even when it came to neurons, there’s a lot of fundamentals, kind of basic things about how neurons use electricity that we really only established relatively recently. Some of the research you’re talking about is definitely not a century-old kind of basic science about how these things work.

Adee: No, not at all. In fact, there was a paper released in 2018 that I didn’t include, which I’m really annoyed by. I just found it recently. Obviously, you can’t find all the papers. But it’s super interesting because it blends that whole sort of ionic basis of the action potential with another thing in my book that’s about how cell development is a little bit like a battery getting charged. Do you know how cells assume an electrical identity that may actually be in charge of the cell fate that they meet? And so we know abou— sorry, the book goes into more detail, but it’s like when a cell is a stem cell or a fertilized egg, it’s depolarized. It’s at zero. And then when it becomes a nerve cell, it goes to that minus 70 that I was talking about before. If it becomes a fat cell, it’s at minus 50. If it’s musculoskeletal tissue, it goes to minus 90. Liver cells are at around minus 40. And so you’ve got real identitarian diversity, electrical diversity in your tissues, which has something to do with what they end up doing in the society of cells. So this paper that I was talking about, the 2018 paper, they actually looked at neurons. This was work from Denis Jabaudon at the University of Geneva, and they were looking at how neurons actually differentiate. Because when baby neurons are born—your brain is made of all kinds of cells. It’s not just cortical cells. There’s a staggering variety of classes of neurons. And as cells actually differentiate, you can watch their voltage change, just like you can do in the rest of the body with these electrosensitive dyes. So that’s an aspect of the brain that we hadn’t even realized until 2018.

Cass: And that all leads me to my next point, which is if we think bioelectricity, we think, okay, nerves zapping around. But neurons are not the only bioelectric network in the body. So talk about some of the other sorts of electrical networks we have that are completely, or largely, separate from our neural networks.

Adee: Well, so Michael Levin is a professor at Tufts University. He does all kinds of other stuff, but mainly, I guess, he’s like the Paul Erdos of bioelectricity, I like to call him, because he’s sort of the central node. He’s networked into everybody, and I think he’s really trying to, again, also assemble this umbrella of bioelectricity to study this all in the aggregate. So his idea is that we are really committed to this idea of bioelectricity being in charge of our sort of central communications network, the way that we understand the environment around us and the way that we understand our ability to move and feel within it. But he thinks that bioelectricity is also how— that the nervous system kind of hijacked this mechanism, which is way older than any nervous system. And he thinks that we have another underlying network that is about our shape, and that this is bioelectrically mediated in really important ways, which impacts development, of course, but also wound healing. Because if you think about the idea that your body understands its own shape, what happens when you get a cut? How does it heal it? It has to go back to some sort of memory of what its shape is in order to heal it over. In animals that regenerate, they have a completely different electrical profile after they’ve been—so after they’ve had an arm chopped off.

So it’s a very different electrical— yeah, it’s a different electrical process that allows a starfish to regrow a limb than the one that allows us to scar over. So you’ve got this thing called a wound current. Your skin cells are arranged in this real tight wall, like little soldiers, basically. And what’s important is that they’re polarized in such a way that if you cut your skin, all the sort of ions flow out in a certain way, which creates this wound current, which then generates an electric field, and the electric field acts like a beacon. It’s like a bat signal, right? And it guides in these little helper cells, the macrophages that come and gobble up the mess and the keratinocytes and the guys who build it back up again and scar you over. And it starts out strong, and as you scar over, as the wound heals, it very slowly goes away. By the time the wound is healed, there’s no more field. And what was super interesting is this guy, Richard Nuccitelli, invented this thing called the Dermacorder that’s able to sense and evaluate the electric field. And he found that in people over the age of 65, the wound field is less than half of what it is in people under 25. And that actually goes in line with another weird thing about us, which is that our bioelectricity— or sorry, our regeneration capabilities are time-dependent and tissue-dependent.

So you probably know that intestinal tissue regenerates all the time. You’re going to digest next week’s food with totally different cells than this morning’s food. But we’re also time-dependent: when we’re just two cells, if you cleave the embryo in half, you get identical twins. Later on during fetal development, healing is totally scarless, which is something we found out when we started being able to do fetal surgery in the womb. Then we’re born, and until we’re between the ages of 7 and 11, if you chop off a fingertip, it regenerates perfectly, including the nail. After that, we lose the ability. So it seems like the older we get, the less we regenerate. And so various programs are now trying to figure out how to take control of various aspects of our bioelectrical systems to do things like radically accelerate healing, for example, or re-engage the body’s developmental processes in order to regenerate preposterous things like a limb. I mean, it sounds preposterous now. Maybe in 20 years, it won’t.

Cass: I want to get into some of the technologies that people are thinking of building on this new science. But part of the story is that the history of this field, both scientifically and technologically, has been plagued by the shadow of quackery. Can you talk a little bit about that: how, on the one hand, we’re very glad we stopped pursuing some very bad ideas, but on the other, that history still casts a shadow on current research and on trying to get real therapies to patients?

Adee: Yeah, absolutely. That was actually one of my favorite chapters to write, the spectacular-pseudoscience one, because, I mean, it is so much fun. It boils down to the fact that we were trigger-happy: we see this electricity, we’re super excited about it, and we start developing early tools to manipulate it in the 1700s. And straight away, it’s like, this is an amazing new tool, and there are all these folk cures out there that people then just start dispensing. Not in the clinic, exactly; I don’t know what you’d call it, but people just start dispensing this stuff. This is separate from the discovery of endogenous electrical activity, which Luigi Galvani famously made in the late 1700s. He’s an anatomist, not an electrician. Electrician, by the way, is what they used to call the literati who were in charge of discovery around electricity, and it had a really different connotation at the time: they were kind of like the rocket scientists of their day.

But Galvani’s just an anatomist, and he starts doing all of these experiments, using these new tools to zap frogs in various ways and permutations. And he decides that he has answered a whole different, older question: how does man’s will animate his hands and let him feel the world around him? He says, “This is electrical in nature.” This was a long-standing mystery; people had been bashing their heads against it for the past 100, 200 years. But he says it’s electrical, and there’s a big, long fight, which I won’t get into too much, between Volta, the guy who invented the battery, and Galvani. Volta says, “No, this is not electrical.” Galvani says, “Yes, it is.” But owing to events, when Volta invents the battery, he basically wins the argument, not because Galvani was wrong, but because Volta had created something useful. He had created a tool that people could use to advance the study of all kinds of things. Galvani’s idea that we have an endogenous electrical impulse didn’t lead to anything that anybody could use, because we didn’t have tools sensitive enough to really measure it. We only had indirect measurements of it.

After Galvani dies in ignominy, his nephew, Giovanni Aldini, decides to take it on himself to rescue, single-handedly, his uncle’s reputation. The problem is, the way he does it is with a series of grotesque, spectacular experiments. He very famously reanimated (well, zapped until they shivered) the corpses of dead criminals, and he was doing really intense things like sticking electrodes connected to huge voltaic piles (proto-batteries) into the rectums of dead prisoners, which would make them sit up halfway and point at the people who were assembled. Very titillating stuff. Many celebrities of the time would crowd around these demonstrations.

Anyway, so Aldini, the nephew, basically just opens the door for everyone to say, “Look what we can do with electricity.” Then in short order, there’s a guy who creates something called the Celestial Bed. They’ve got rings, they’ve got electric belts for stimulating the nethers. The Celestial Bed is supposed to help infertile couples. That’s how wild electricity is in those days. You know how everybody went crazy for crypto scams last year? Electricity was like the crypto of the 1830s or whatever. And the Celestial Bed? People would come and pay £9,000 to spend a night in it, right? Well, not at the time; that’s in today’s money. And it didn’t even use electricity. It used the idea of electricity. It was homeopathy, but with electricity. You don’t even know where to start. So that’s the caliber of the pseudoscience, and it has really echoed down through the years. That was in the 1800s. But even now, when people submit papers or grant applications, reviewers look at this electric stuff and they’re like, “Does anyone still believe this shit?” I heard more than one researcher say that to me. And it’s like, this is rigorous science, but it’s been tarnished by the association.

Cass: So you mentioned wound care, and the book talks about some of the ways [inaudible] wound care. But we’re also looking at other really ambitious ideas, like regenerating limbs, as an extension of wound care. And you also make the point about doing diagnostics, and then possibly treatments, for things like cancer. That means thinking about cancer in a very different way than the very tightly focused genetic view we have of it now, and thinking about it, kind of literally, in a wider context. So can you talk about that a little bit?

Adee: Sure. And I want to start by saying that I went to a lot of trouble to be really careful in the book. I’ve had cancer in my family, and it’s tough to talk about, because you don’t want to give people the idea that there’s a cure for cancer around the corner when this is basic research and intriguing findings. That wouldn’t be fair. And I struggled. I thought for a while, like, “Do I even bring this up?” But the ideas behind it are so intriguing, and if there were more research dollars thrown at it, or pounds or Swiss francs or whatever, you might be able to really start moving the needle on some of this stuff. The idea is, there are two electrical— oh God, I don’t want to say avenues, but it is unfortunately what I have to do. There are two electrical avenues to pursue in cancer. The first is something that a researcher called Mustafa Djamgoz, at Imperial College here in the UK, has been studying since the ‘90s. He used to be a neurobiologist; he was looking at vision. And he was talking to some of his oncologist friends, and they gave him some cancer cell lines, and he started looking at the electrical behavior of cancer cells, and he started finding some really weird behaviors.

Cancer cells that should not have had anything to do with action potentials, like cells from prostate cancer lines, were oscillating like crazy when he looked at them, as if they were nerves. And then he started looking at other kinds of cancer cells, and they were all doing this oscillating behavior. So he spent something like seven years bashing his head against the wall; nobody wanted to listen to him. But now way more people are investigating this. There’s going to be an ion channels in cancer symposium, I think later this month, actually, in Italy. And he and a lot of other researchers, like Annarosa Arcangeli, have found that the reason cancer cells may have these oscillating properties is that this is how they communicate with each other that it’s time to leave the nest of the tumor and start invading and metastasizing. Separately, there have been very intriguing findings. This is really early days; it’s only a couple of years that people have started noticing this, but there have been a couple of papers now. People who are on certain kinds of ion channel blockers for neurological conditions, like epilepsy, have cancer profiles that are slightly different from normal: if they do get cancer, they are slightly less likely to die of it, in the aggregate. Nobody should be starting to eat ion channel blockers.

But they’re starting to zero in on which particular ion channels might be responsible, and these aren’t just the ones that you and I have. In these cancers, they’re an expression of something that normally exists only when we’re developing in the womb. It’s part of the reason we can grow ourselves so quickly, which of course makes sense, because that’s what cancer does when it metastasizes: it grows really quickly. So there’s a lot of work right now trying to identify exactly how to target these. And it wouldn’t be a cure for cancer. It would be a way to keep a tumor in check. This is part of a strategy that has been proposed a little bit in the UK for some kinds of cancer, like the triple-negative kinds that just keep coming back. Instead of subjecting someone to radiation and chemo, especially when they’re older (really screwing up their quality of life while possibly not even giving them that much more time), what if you tried to treat cancer more like a chronic disease, keep it managed, and maybe that gives a person 10 or 20 years? That’s a huge amount of time. And all without messing up their quality of life.

This is a whole conversation that’s being had, but that’s one avenue, and there’s a lot of research going on in it right now that may yield fruit fairly soon. The much more sci-fi version of this (the studies have mainly been done in tadpoles, but they’re so interesting) comes from Michael Levin, again, and his postdoc at the time, I think, Brook Chernet. They were looking at what happens— so let’s go back to that society-of-cells thing I was talking about. You get a fertilized egg; it’s depolarized, at zero. But then its membrane voltage charges up, and it becomes a nerve cell or a skin cell or a fat cell. What’s super interesting is that when those responsible members of your body’s society decide to abscond and say, “Screw this. I’m not participating in society anymore. I’m just going to eat and grow and become cancer,” their membrane voltage also changes. It goes much closer to zero again, almost like it’s having a midlife crisis.

So what Levin and Chernet found is that you can manipulate those cellular electrics to make cells stop behaving cancerously. They did this in tadpoles. They had genetically engineered the tadpoles to express tumors, but when they made sure that the cells could not depolarize, most of those tadpoles did not express the tumors. And when they later took tadpoles that already had the tumors and repolarized the cells, that tissue started acting like normal tissue, not like cancer tissue. Again, this is the sci-fi stuff, but the fact that it was done at all is so fascinating, from that epigenetic, body-pattern perspective, right?

Cass: So, staying with that sci-fi stuff, except this one is even closer to reality: this goes back to some of those experiments in which you zapped yourself. Can you talk a little bit about some of these devices that you can wear, which appear to really enhance certain mental abilities? And some of these you [inaudible].

Adee: So the kit that I wore: I actually found out about it while I was at Spectrum, when I was at DARPATech. This program manager told me about it, and I was really stunned to find out that just by running two milliamps of current through your brain, you would be able to improve your— well, it’s not that your ability is improved. It’s that you could go from novice to expert in half the time that it would take you normally, according to the papers. And I really wanted to try it. I was trying to get a feature written for IEEE Spectrum, but they kept ghosting me, and by the time I got to New Scientist, I was like, fine, I’m just going to do it myself. So they let me come over, and they put this kit on me, and it had these very custom electrodes, these things that look like big daisies. And this guy had brewed his own electrolyte solution and sort of smashed it onto my head, and it was all very slimy.

So I was doing this video game called DARWARS Ambush!, which is just a training— it’s a shooter simulation used for training. So it was a gonzo stunt, not an experiment. But he was trying to replicate, as much as he could, the conditions of me not knowing whether the electricity was on. So he had the controls behind my back, and he came in a couple of times and would either turn it on or pretend to. And I was practicing, and I was really bad at it. That is not my game. Let’s just put it that way. I prefer driving games. But it was really frustrating as well, because I never knew when the electricity was on. So I was just like, “There’s no difference. This sucks. I’m terrible.” And that inner buzz kept getting stronger and stronger, because I’d also made bad choices. I’d taken a red-eye flight the night before. And I was like, “Why would I do that? Why wouldn’t I just give myself one extra day to recover before I go in and do this really complicated feature where I have to learn about flow state and electrical stimulation?” I was getting really tense and just angrier and angrier. And then at one point, he came in after my, I don’t know, 5th or 6th, or 400th, horrible attempt, where I just got blown up every time. And then he turned on the electricity, and I could totally feel that something had happened, because I have a little retainer in my mouth, just at the bottom. And I was like, “Whoa.” But then I was just like, “Okay. Well, now this is going to suck extra much, because I know the electricity is on, so it’s not even a freaking sham condition.” So I was mad.

But then the thing started again, and all of a sudden, all the buzzing little angry voices just stopped, and it was so profound. I’ve talked about it quite a bit, but every time I remember it, I get a little chill, because it was the first time I’d ever realized, number one, how pissy my inner voices are, and just how distracting and abusive they are. And I was like, “You guys suck, all of you.” But somebody had just put a bell jar between me and them, and that feeling of being free from them was profound. At first, I didn’t even notice, because I was just busy doing stuff. And all of a sudden, I was amazing at this game, and I dispatched all of the enemies and whatnot. Afterwards, when they came in, I was actually pissed, because I was just like, “Oh, now I get it right, and you come in after three minutes. But the last times, when I was screwing it up, you left me in there to cook for 20 minutes.” And they were like, “No, 20 minutes has gone by,” which I could not believe. But yeah, it was just a really profound experience, which is what led me down this giant rabbit hole in the first place. Because when I wrote the feature afterwards, all of a sudden I started paying attention to the whole tDCS thing, which I hadn’t yet. I had just sort of been focusing [crosstalk].

Cass: And that’s transcranial—?

Adee: Oh sorry, transcranial direct current stimulation.

Cass: There you go. Thank you. Sorry.

Adee: No. Yeah, it’s a mouthful. But that’s when I started to notice the quackery we were talking about before. All that history was really informing the discussion around tDCS, because people were just like, “Oh, sure. Zap your brain with some electricity and you become super smart.” And I was like, “Oh, did I fall for the placebo effect? What happened here?” There was this big study from Australia where the researcher was just like, “When we average out all of the effects of tDCS, we find that it does absolutely nothing.” Other researchers stimulated a cadaver to see if the current would even reach the brain tissue and concluded it wouldn’t. But that’s basically what started me researching the book, and I was able to find answers to all those questions. Of course, tDCS is finicky, just like the rest of the electrome. Your living bone is conductive, for example. So when you’re trying to put an electric field on someone’s head, you have to account for things like how thick that person’s skull is in the place you want to stimulate. They’re still working out the parameters.

There have been some really good studies that show under which particular conditions they’ve been able to make it work. It does not work for all the conditions for which it is claimed to work. There is some snake oil. There’s a lot left to be done, but a better understanding of how this affects the different layers of the, I guess, call it the electrome, would probably make it something you could use with replicability. Is that a word? But that also applies to things like deep brain stimulation, which, for Parkinson’s, is fantastic. They’re also trying to use it for depression, and in some cases it works so—I want to use a bad word—amazingly. Helen Mayberg, who runs these trials, said that for some people, this is an option of last resort, and then they get the stimulation, and they “just get back on the bus.” That’s her quote. It’s like a switch that you flip. And for other people, it doesn’t work at all.

Cass: Well, the book is packed with even more fantastic stuff, and I’m sorry we don’t have time to go through it all, because literally, I could sit here and talk to you all day about this.

Adee: I didn’t even get into the frog battery, but okay, that’s fine. Fine, fine, skip the frog. Sorry, I’m just kidding. I’m kidding, I’m kidding.

Cass: And thank you so much, Sally, for chatting with us today.

Adee: Oh, thank you so much. I really love talking about it, especially with you.

Cass: Today on Fixing the Future, we’ve been talking with Sally Adee about her new book on the body’s electrome. For IEEE Spectrum, I’m Stephen Cass.

23 May. 2023


First-year college students are understandably frustrated when they can’t get into popular upper-level electives. But they usually just gripe. Paras Jha was an exception. Enraged that upper-class students were given priority to enroll in a computer-science elective at Rutgers, the State University of New Jersey, Paras decided to crash the registration website so that no one could enroll.

On Wednesday night, 19 November 2014, at 10:00 p.m. EST—as the registration period for first-year students in spring courses had just opened—Paras launched his first distributed denial-of-service (DDoS) attack. He had assembled an army of some 40,000 bots, primarily in Eastern Europe and China, and unleashed them on the Rutgers central authentication server. The botnet sent thousands of fraudulent requests to authenticate, overloading the server. Paras’s classmates could not get through to register.

The next semester Paras tried again. On 4 March 2015, he sent an email to the campus newspaper, The Daily Targum: “A while back you had an article that talked about the DDoS attacks on Rutgers. I’m the one who attacked the network.… I will be attacking the network once again at 8:15 pm EST.” Paras followed through on his threat, knocking the Rutgers network offline at precisely 8:15 p.m.


On 27 March, Paras unleashed another assault on Rutgers. This attack lasted four days and brought campus life to a standstill. Fifty thousand students, faculty, and staff had no computer access from campus.

On 29 April, Paras posted a message on Pastebin, a website popular with hackers for sending anonymous messages. “The Rutgers IT department is a joke,” he taunted. “This is the third time I have launched DDoS attacks against Rutgers, and every single time, the Rutgers infrastructure crumpled like a tin can under the heel of my boot.”

Paras was furious that Rutgers chose Incapsula, a small cybersecurity firm based in Massachusetts, as its DDoS-mitigation provider. He claimed that Rutgers chose the cheapest company. “Just to show you the poor quality of Incapsula’s network, I have gone ahead and decimated the Rutgers network (and parts of Incapsula), in the hopes that you will pick another provider that knows what they are doing.”

Paras’s fourth attack on the Rutgers network, taking place during finals, caused chaos and panic on campus. Paras reveled in his ability to shut down a major state university, but his ultimate objective was to force it to abandon Incapsula. Paras had started his own DDoS-mitigation service, ProTraf Solutions, and wanted Rutgers to pick ProTraf over Incapsula. And he wasn’t going to stop attacking his school until it switched.

A Hacker Forged in Minecraft

Paras Jha was born and raised in Fanwood, a leafy suburb in central New Jersey. When Paras was in the third grade, a teacher recommended that he be evaluated for attention deficit hyperactivity disorder, but his parents didn’t follow through.

As Paras progressed through elementary school, his struggles increased. Because he was so obviously intelligent, his teachers and parents attributed his lackluster performance to laziness and apathy. His perplexed parents pushed him even harder.

Paras sought refuge in computers. He taught himself how to code when he was 12 and was hooked. His parents happily indulged this passion, buying him a computer and providing him with unrestricted Internet access. But their indulgence led Paras to isolate himself further, as he spent all his time coding, gaming, and hanging out with his online friends.

Paras was particularly drawn to the online game Minecraft. In ninth grade, he graduated from playing Minecraft to hosting servers. It was in hosting game servers that he first encountered DDoS attacks.

Minecraft server administrators often hire DDoS services to knock rivals offline. As Paras learned more sophisticated DDoS attacks, he also studied DDoS defense. As he became proficient in mitigating attacks on Minecraft servers, he decided to create ProTraf Solutions.

Paras’s obsession with Minecraft attacks and defense, compounded by his untreated ADHD, led to an even greater retreat from family and school. His poor academic performance in high school frustrated and depressed him. His only solace was Japanese anime and the admiration he gained from the online community of Minecraft DDoS experts.

Paras’s struggles deteriorated into paralysis when he enrolled in Rutgers, studying for a B.S. in computer science. Without his mother’s help, he was unable to regulate the normal demands of living on his own. He could not manage his sleep, schedule, or study. Paras was also acutely lonely. So he immersed himself in hacking.

Paras and two hacker friends, Josiah White and Dalton Norman, decided to go after the kings of DDoS—a gang known as VDoS. The gang had been providing these services to the world for four years, which is an eternity in cybercrime. The decision to fight experienced cybercriminals may seem brave, but the trio were actually older than their rivals. The VDoS gang members had been only 14 years old when they started to offer DDoS services from Israel in 2012. These 19-year-old American teenagers would be going to battle against two 18-year-old Israeli teenagers. The war between the two teenage gangs would not only change the nature of malware. Their struggle for dominance in cyberspace would create a doomsday machine.

Bots for Tots

Here’s how three teenagers built a botnet that could take down the Internet


The Mirai botnet, with all its devastating potential, was not the product of an organized-crime or nation-state hacking group—it was put together by three teenage boys. They rented out their botnet to paying customers to do mischief with and used it to attack chosen targets of their own. But the full extent of the danger became apparent only later, after this team made the source code for their malware public. Then others used it to do greater harm: crashing Germany’s largest Internet service provider; attacking Dyn’s Domain Name System servers, making the Internet unusable for millions; and taking down all of Liberia’s Internet—to name a few examples.

The Mirai botnet exploited vulnerable Internet of Things devices, such as Web-connected video cameras, ones that supported Telnet, an outdated system for logging in remotely. Owners of these devices rarely updated their passwords, so they could be easily guessed using a strategy called a dictionary attack.

The first step in assembling a botnet was to scan random IP addresses looking for vulnerable IoT devices, ones whose passwords could be guessed. Once identified, the addresses of these devices were passed to a “loader,” which would put the malware on the vulnerable device. Infected devices located all over the world could then be used for distributed denial-of-service attacks, orchestrated by a command-and-control (C2) server. When not attacking a target, these bots would be enlisted to scan for more vulnerable devices to infect.

Botnet Madness

Botnet malware is useful for financially motivated crime because botmasters can tell the bots in their thrall to implant malware on vulnerable machines, send phishing emails, or engage in click fraud, in which botnets profit by directing bots to click pay-per-click ads. Botnets are also great DDoS weapons because they can be trained on a target and barrage it from all directions. One day in February 2000, for example, the hacker MafiaBoy knocked out Fifa.com, Amazon.com, Dell, E-Trade, eBay, CNN, as well as Yahoo, at the time the largest search engine on the Internet.

After taking so many major websites offline, MafiaBoy was deemed a national-security threat. President Clinton ordered a national manhunt to find him. In April 2000, MafiaBoy was arrested and charged, and in January 2001 he pled guilty to 58 charges of denial-of-service attacks. Law enforcement did not reveal MafiaBoy’s real name, as this national-security threat was 15 years old.

Both MafiaBoy and the VDoS crew were adolescent boys who crashed servers. But whereas MafiaBoy did it for the sport, VDoS did it for the money. Indeed, these teenage Israeli kids were pioneering tech entrepreneurs. They helped launch a new form of cybercrime: DDoS as a service. With it, anyone could now hack with the click of a button, no technical knowledge needed.

It might be surprising that DDoS providers could advertise openly on the Web. After all, DDoSing another website is illegal everywhere. To get around this, these “booter services” have long argued they perform a legitimate function: providing those who set up Web pages a means to stress test websites.

In theory, such services do play an important function. But only in theory. As a booter-service provider admitted to University of Cambridge researchers, “We do try to market these services towards a more legitimate user base, but we know where the money comes from.”

The Botnets of August

Paras dropped out of Rutgers in his sophomore year and, with his father’s encouragement, spent the next year focused on building ProTraf Solutions, his DDoS-mitigation business. And just like a mafia don running a protection racket, he had to create the need for that protection. After launching four DDoS attacks in his freshman year, he attacked Rutgers yet again in September 2015, still hoping that his former school would give up on Incapsula. Rutgers refused to budge.

ProTraf Solutions was failing, and Paras needed cash. In May 2016, Paras reached out to Josiah White. Like Paras, Josiah frequented Hack Forums. When he was 15, he developed major portions of Qbot, a botnet worm that at its height in 2014 had enslaved half a million computers. Now 18, Josiah switched sides and worked with his friend Paras at ProTraf doing DDoS mitigation.

The hacker’s command-and-control (C2) server orchestrates the actions of many geographically distributed bots (computers under its control). Those computers, which could be IoT devices like IP cameras, can be directed to overwhelm the victim’s servers with unwanted traffic, making them unable to respond to legitimate requests. IEEE Spectrum

But Josiah soon returned to hacking and started working with Paras to take the Qbot malware, improve it, and build a bigger, more powerful DDoS botnet. Paras and Josiah then partnered with 19-year-old Dalton Norman. The trio turned into a well-oiled team: Dalton found the vulnerabilities; Josiah updated the botnet malware to exploit these vulnerabilities; and Paras wrote the C2—software for the command-and-control server—for controlling the botnet.

But the trio had competition. Two other DDoS gangs—Lizard Squad and VDoS—decided to band together to build a giant botnet. The collaboration, known as PoodleCorp, was successful. The amount of traffic that could be unleashed on a target from PoodleCorp’s botnet hit a record 400 gigabits per second, almost four times the rate that any previous botnet had achieved. They used their new weapon to attack banks in Brazil, U.S. government sites, and Minecraft servers. They achieved this firepower by hijacking 1,300 Web-connected cameras. Web cameras tend to have powerful processors and good connectivity, and they are rarely patched. So a botnet that harnesses video cameras has enormous cannons at its disposal.

While PoodleCorp was on the rise, Paras, Josiah, and Dalton worked on a new weapon. By the beginning of August 2016, the trio had completed the first version of their botnet malware. Paras called the new code Mirai, after the anime series Mirai Nikki.

When Mirai was released, it spread like wildfire. In its first 20 hours, it infected 65,000 devices, doubling in size every 76 minutes. And Mirai had an unwitting ally in the botnet war then raging.
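Those numbers imply almost perfectly exponential growth. As a back-of-the-envelope check (my arithmetic, not a figure from the investigation), a population that doubles every 76 minutes grows over 20 hours, or 1,200 minutes, by a factor of

```latex
\frac{N(t)}{N_0} = 2^{t/T} = 2^{1200/76} \approx 2^{15.8} \approx 5.7 \times 10^{4},
```

meaning a single seed infection is enough to reach roughly the reported 65,000 devices in that first day.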

Up in Anchorage, Alaska, the FBI cyber unit was building a case against VDoS. The FBI was unaware of Mirai or its war with VDoS. The agents did not regularly read online boards such as Hack Forums. They did not know that the target of their investigation was being decimated. The FBI also did not realize that Mirai was ready to step into the void.

The head investigator in Anchorage was Special Agent Elliott Peterson. A former U.S. Marine, Peterson is a calm and self-assured agent with a buzz cut of red hair. At the age of 33, Peterson had returned to his native state of Alaska to prosecute cybercrime.

On 8 September 2016, the FBI’s Anchorage and New Haven cyber units teamed up and served a search warrant in Connecticut on the member of PoodleCorp who ran the C2 that controlled all its botnets. On the same day, the Israeli police arrested the VDoS founders in Israel. Suddenly, PoodleCorp was no more.

The Mirai group waited a couple of days to assess the battlefield. As far as they could tell, they were the only botnet left standing. And they were ready to use their new power. Mirai won the war because Israeli and American law enforcement arrested the masterminds behind PoodleCorp. But Mirai would have triumphed anyway, as it was ruthlessly efficient in taking control of Internet of Things devices and excluding competing malware.

A few weeks after the arrests of those behind VDoS, Special Agent Peterson found his next target: the Mirai botnet. In the Mirai case, we do not know the exact steps that Peterson’s team took in their investigation: Court orders in this case are currently “under seal,” meaning that the court deems them secret. But from public reporting, we know that Peterson’s team got its break in the usual way—from a Mirai victim: Brian Krebs, a cybersecurity reporter whose blog was DDoSed by the Mirai botnet on 25 September.

The FBI uncovered the IP address of the C2 and loading servers but did not know who had opened the accounts. Peterson’s team likely subpoenaed the hosting companies to learn the names, emails, cellphones, and payment methods of the account holders. With this information, it would seek court orders and then search warrants to acquire the content of the conspirators’ conversations.

Still, the hunt for the authors of the Mirai malware must have been a difficult one, given how clever these hackers were. For example, to evade detection Josiah didn’t just use a VPN. He hacked the home computer of a teenage boy in France and used his computer as the “exit node.” The orders for the botnet, therefore, came from this computer. Unfortunately for the owner, he was a big fan of Japanese anime and thus fit the profile of the hacker. The FBI and the French police discovered their mistake after they raided the boy’s house.

Done and Done For

After the trio had wielded Mirai’s power for two months, Paras dumped nearly the complete source code for Mirai on Hack Forums. “I made my money, there’s lots of eyes looking at IOT now, so it’s time to GTFO [Get The F*** Out],” Paras wrote. With that code dump, Paras had enabled anyone to build their own Mirai. And they did.

Dumping code is reckless, but not unusual. If the police find source code on a hacker’s devices, the hacker can claim to have merely “downloaded it from the Internet.” Paras’s irresponsible disclosure was part of a false-flag operation meant to throw off the FBI, which had been gathering evidence indicating Paras’s involvement in Mirai and had contacted him to ask questions. Though he gave the agent a fabricated story, getting a text from the FBI probably terrified him.

Mirai had captured the attention of the cybersecurity community and of law enforcement. But not until after Mirai’s source code dropped would it capture the attention of the entire United States. The first attack after the dump was on 21 October, on Dyn, a company based in Manchester, N.H., that provides Domain Name System (DNS) resolution services for much of the East Coast of the United States.


It began at 7:07 a.m. EST with a series of 25-second attacks, thought to be tests of the botnet and of Dyn’s infrastructure. Then came the sustained assaults: one of an hour, and then one of five hours. Interestingly, Dyn was not the only target; Sony’s PlayStation video infrastructure was also hit. Because the torrents of traffic were so immense, many other websites were affected. Domains such as cnn.com, facebook.com, and nytimes.com wouldn’t load. For the vast majority of affected users, the Internet became unusable. At 7:00 p.m., another 10-hour salvo hit Dyn and PlayStation.

Further investigations confirmed the point of the attack. Along with Dyn and PlayStation traffic, the botnet targeted Xbox Live and Nuclear Fallout game-hosting servers. Nation-states were not aiming to hack the upcoming U.S. elections. Someone was trying to boot players off their game servers. Once again—just like MafiaBoy, VDoS, Paras, Dalton, and Josiah—the attacker was a teenage boy, this time a 15-year-old in Northern Ireland named Aaron Sterritt.

Meanwhile, the Mirai trio left the DDoS business, just as Paras said. But Paras and Dalton did not give up on cybercrime. They just took up click fraud.

Click fraud was more lucrative than running a booter service. While Mirai was no longer as big as it had been, the botnet could nevertheless generate significant advertising revenue. Paras and Dalton earned as much money in one month from click fraud as they ever made with DDoS. By January 2017, they had earned over US $180,000, as opposed to a mere $14,000 from DDoSing.

Had Paras and his friends simply shut down their booter service and moved on to click fraud, the world would likely have forgotten about them. But by releasing the Mirai code, Paras created imitators. Dyn was the first major copycat attack, but many others followed. And due to the enormous damage these imitators wrought, law enforcement was intensely interested in the Mirai authors.

After collecting information tying Paras, Josiah, and Dalton to Mirai, the FBI quietly brought each up to Alaska. Peterson’s team showed the suspects its evidence and gave them the chance to cooperate. Given that the evidence was irrefutable, each folded.

Paras Jha was indicted twice, once in New Jersey for his attack on Rutgers, and once in Alaska for Mirai. Both indictments carried the same charge—one violation of the Computer Fraud and Abuse Act. Paras faced up to 10 years in federal prison for his actions. Josiah and Dalton were only indicted in Alaska and so faced 5 years in prison.

The trio pled guilty. At the sentencing hearing held on 18 September 2018, in Anchorage, each of the defendants expressed remorse for his actions. Josiah White’s lawyer conveyed his client’s realization that Mirai was “a tremendous lapse in judgment.”

Unlike Josiah, Paras spoke directly to Judge Timothy Burgess in the courtroom. Paras began by accepting full responsibility for his actions and expressed his deep regret for the trouble he’d caused his family. He also apologized for the harm he’d caused businesses and, in particular, Rutgers, the faculty, and his fellow students.

The Department of Justice made the unusual decision not to ask for jail time. In its sentencing memo, the government noted “the divide between [the defendants’] online personas, where they were significant, well-known, and malicious actors in the DDoS criminal milieu and their comparatively mundane ‘real lives’ where they present as socially immature young men living with their parents in relative obscurity.” It recommended five years of probation and 2,500 hours of community service.

The government had one more request: for that community service “to include continued work with the FBI on cybercrime and cybersecurity matters.” Even before sentencing, Paras, Josiah, and Dalton had logged close to 1,000 hours helping the FBI hunt and shut down Mirai copycats. They contributed to more than a dozen law enforcement and research efforts. In one instance, the trio assisted in stopping a nation-state hacking group. They also helped the FBI prevent DDoS attacks aimed at disrupting Christmas-holiday shopping. Judge Burgess accepted the government’s recommendation, and the trio escaped jail time.

The most poignant moments in the hearing were Paras’s and Dalton’s singling out for praise the very person who caught them. “Two years ago, when I first met Special Agent Elliott Peterson,” Paras told the court, “I was an arrogant fool believing that somehow I was untouchable. When I met him in person for the second time, he told me something I will never forget: ‘You’re in a hole right now. It’s time you stop digging.’ ” Paras finished his remarks by thanking “my family, my friends, and Agent Peterson for helping me through this.”

22 May. 2023


Many teenagers take a job at a restaurant or retail store, but Megan Dion got a head start on her engineering career. At 16, she landed a part-time position at FXB, a mechanical, electrical, and plumbing engineering company in Chadds Ford, Pa., where she helped create and optimize project designs.

She continued to work at the company during her first year as an undergraduate at the Stevens Institute of Technology, in Hoboken, N.J., where she is studying electrical engineering with a concentration in power engineering. Now a junior, Dion is part of the five-year Stevens cooperative education program, which has her rotating through three full-time work placements, working from the second quarter of the school year through August. She returns to school full time in September with a more impressive résumé.

For her academic achievements, Dion received an IEEE Power & Energy Society scholarship and an IEEE PES Anne-Marie Sahazizian scholarship this year. The PES Scholarship Plus Initiative rewards undergraduates who one day are likely to build green technologies and change the way we generate and utilize power. Dion received US $2,000 from each scholarship toward her education.

She says she’s looking forward to networking with other scholarship recipients and IEEE members.

“Learning from other people’s stories and seeing myself in them and where my career could be in 10 or 15 years” motivates her, she says.

Gaining hands-on experience in power engineering

Dion’s early exposure to engineering came from her father, who owned a commercial electrical construction business for 20 years, and sparked her interest in the field. He would bring her along to meetings and teach her about the construction industry.

Then she was able to gain on-the-job experience at FXB, where she quickly absorbed what she observed around her.

“I would carry around a notebook everywhere I went, and I took notes on everything,” she says. “My team knew they never would have to explain something to me twice.”

“If I’m going to do something, I’m going to do it the best I can.”

She gained the trust of her colleagues, and they asked her to continue working with them while she attended college. She accepted the offer and supported a critical project at the firm: designing an underground power distribution and conduit system in the U.S. Virgin Islands to replace overhead power lines. The underground system could minimize power loss after hurricanes.

Skilled in AutoCAD software, she contributed to the electrical design. Dion worked directly with the senior electrical designer and the president of the company, and she helped deliver status updates. The experience, she says, solidified her decision to become a power engineer.

After completing her stint at FXB, she entered her first work placement through Stevens, which brought her to the Long Island Rail Road, in New York, through HNTB, an infrastructure design company in Kansas City, Mo. She completed an eight-month assignment at the LIRR, assisting the traction power and communications team in DC electrical system design for a major capacity improvement project for commuters in the New York metropolitan area.

Working on a railroad job was out of her comfort zone, she says, but she was up for the challenge.

“In my first meeting with the firm, I was in shock,” she says. “I was looking at train tracks and had to ask someone on the team to walk me through everything I needed to know, down to the basics.”

Dion describes how they spent two hours going through each type of drawing produced, including third-rail sectionalizing, negative-return diagrams, and conduit routing. Each sheet included 15 to 30 meters of a 3.2-kilometer section of track.

What Dion has appreciated most about the work placement program, she says, is learning about niche areas within power and electrical engineering.

She’s now at her second placement, at structural engineering company Thornton Tomasetti in New York City, where she is diving into forensic engineering. The role interests her because of its focus on investigating what went wrong when an engineering project failed.

“My dad taught me to be 1 percent better each day.”

“It’s a career path I had never known about before,” she says. Thornton Tomasetti investigates when something goes awry during the construction process, determines who is likely at fault, and provides expert testimony in court.

Dion joined IEEE in 2020 to build her engineering network. She is preparing to graduate from Stevens next year, and then plans to pursue a master’s degree in electrical engineering while working full time.

The importance of leadership and business skills

To round out her experience and expertise in power and energy, Dion is taking business courses. She figures she might one day follow in her father’s entrepreneurial path.

“My dad is my biggest supporter as well as my biggest challenger,” she says. “He will always ask me ‘Why?’ to challenge my thinking and help me be the best I can be. He’s taught me to be 1 percent better each day.” She adds that she can go to him whenever she has an engineering question, pulling from his decades of experience in the industry.

Because of her background growing up around the electrical industry, she has been less intimidated when she is the only woman in a meeting, she says. She finds that being a woman in a male-dominated industry is an opportunity, adding that there is a lot of support and camaraderie among women in the field.

While excelling academically, she is also a starter on the varsity volleyball team at Stevens. She has played the sport since she was in the seventh grade. Her athletic background has taught her important skills, she says, including how to lead by example and the importance of ensuring the entire team is supported and working well together.

Dion’s competitive nature won’t allow her to hold herself back: “If I’m going to do something,” she says, “I’m going to do it the best I can.”

21 May. 2023


Inside today’s computers, phones, and other mobile devices, more and more sensors, processors, and other electronics are fighting for space. Taking up a big part of this valuable real estate are the cameras—just about every gadget needs a camera, or two, three, or more. And the most space-consuming part of the camera is the lens.

The lenses in our mobile devices typically collect and direct incoming light by refraction, using a curve in a transparent material, usually plastic, to bend the rays. So these lenses can’t shrink much more than they already have: To make a camera small, the lens must have a short focal length; but the shorter the focal length, the greater the curvature and therefore the thickness at the center. These highly curved lenses also suffer from all sorts of aberrations, so camera-module manufacturers use multiple lenses to compensate, adding to the camera’s bulk.
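The thin-lens lensmaker’s equation makes this tradeoff concrete. For a lens of refractive index n whose two surfaces have radii of curvature R1 and R2,

```latex
\frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right),
```

so shrinking the focal length f forces the surface radii to shrink as well, and a more steeply curved surface that still spans the full aperture is necessarily thicker at its center.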

With today’s lenses, the size of the camera and image quality are pulling in different directions. The only way to make lenses smaller and better is to replace refractive lenses with a different technology.

That technology exists. It’s the metalens, a device developed at Harvard and commercialized at Metalenz, where I am an applications engineer. We create these devices using traditional semiconductor-processing techniques to build nanostructures onto a flat surface. These nanostructures use a phenomenon called metasurface optics to direct and focus light. These lenses can be extremely thin—a few hundred micrometers thick, about twice the thickness of a human hair. And we can combine the functionality of multiple curved lenses into just one of our devices, further addressing the space crunch and opening up the possibility of new uses for cameras in mobile devices.

Centuries of lens alternatives

Before I tell you how the metalens evolved and how it works, consider a few previous efforts to replace the traditional curved lens.

Conceptually, any device that manipulates light does so by altering its three fundamental properties: phase, polarization, and intensity. The idea that any wave or wave field can be deconstructed into these properties was proposed by Christiaan Huygens in 1678 and is a guiding principle in all of optics.

In this single metalens [between tweezers], the pillars are less than 500 nanometers in diameter. The black box at the bottom left of the enlargement represents 2.5 micrometers. Metalenz

In the early 19th century, the world’s most powerful economies placed great importance on the construction of lighthouses with larger and more powerful projection lenses to help protect their shipping interests. However, as these projection lenses grew larger, so did their weight. As a result, the physical size of a lens that could be raised to the top of a lighthouse and structurally supported placed limits on the power of the beam the lighthouse could produce.

French physicist Augustin-Jean Fresnel realized that if he cut a lens into facets, much of the central thickness of the lens could be removed while retaining the same optical power. The Fresnel lens represented a major improvement in optical technology and is now used in a host of applications, including automotive headlights and brake lights, overhead projectors, and—still—lighthouse projection lenses. However, the Fresnel lens has limitations. For one, the flat edges of facets become sources of stray light. For another, faceted surfaces are more difficult to manufacture and polish precisely than continuously curved ones are. It’s a no-go for camera lenses because of the surface-accuracy requirements needed to produce good images.

Another approach, now widely used in 3D sensing and machine vision, traces its roots to one of the most famous experiments in modern physics: Thomas Young’s 1802 demonstration of diffraction. This experiment showed that light behaves like a wave, and when the waves meet, they can amplify or cancel one another depending on how far the waves have traveled. The so-called diffractive optical element (DOE) based on this phenomenon uses the wavelike properties of light to create an interference pattern—that is, alternating regions of dark and light, in the form of an array of dots, a grid, or any number of shapes. Today, many mobile devices use DOEs to convert a laser beam into “structured light.” This light pattern is projected, captured by an image sensor, then used by algorithms to create a 3D map of the scene. These tiny DOEs fit nicely into small gadgets, yet they can’t be used to create detailed images. So, again, applications are limited.
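The dot patterns a DOE projects follow directly from the interference condition Young identified. For light of wavelength λ passing through features spaced a distance d apart, the waves reinforce only at angles where their path difference is a whole number of wavelengths:

```latex
d \sin\theta_m = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots
```

Each allowed order m becomes one bright dot in the projected pattern.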

Enter the metalens

Enter the metalens. Developed at Harvard by a team led by professor Federico Capasso, then-graduate student Rob Devlin, research associates Reza Khorasaninejad, Wei Ting Chen, and others, metalenses work in a way that’s fundamentally different from any of these other approaches.

A metalens is a flat glass surface with a semiconductor layer on top. Etched in the semiconductor is an array of pillars several hundred nanometers high. These nanopillars can manipulate light waves with a degree of control not possible with traditional refractive lenses.

Imagine a shallow marsh filled with seagrass standing in water. An incoming wave causes the seagrass to sway back and forth, sending pollen flying off into the air. If you think of that incoming wave as light energy, and the nanopillars as the stalks of seagrass, you can picture how the properties of a nanopillar, including its height, thickness, and position next to other nanopillars, might change the distribution of light emerging from the lens.

A 12-inch wafer can hold up to 10,000 metalenses, made using a single semiconductor layer. Metalenz

We can use the ability of a metalens to redirect and change light in a number of ways. We can scatter and project light as a field of infrared dots. Invisible to the eye, these dots are used in many smart devices to measure distance, mapping a room or a face. We can sort light by its polarization (more on that in a moment). But probably the best way to explain how we are using these metasurfaces as a lens is by looking at the most familiar lens application—capturing an image.

The process starts by illuminating a scene with a monochromatic light source—a laser. (While using a metalens to capture a full-color image is conceptually possible, that is still a lab experiment and far from commercialization.) The objects in the scene bounce the light all over the place. Some of this light comes back toward the metalens, which is pointed, pillars out, toward the scene. These returning photons hit the tops of the pillars and transfer their energy into vibrations. The vibrations—called plasmons—travel down the pillars. When that energy reaches the bottom of a pillar, it exits as photons, which can then be captured by an image sensor. Those photons don’t need to have the same properties as those that entered the pillars; we can change these properties by the way we design and distribute the pillars.

From concept to commercialization

Researchers around the world have been exploring the concept of metalenses for decades.

In a paper published in 1968 in Soviet Physics Uspekhi, Russian physicist Victor Veselago put the idea of metamaterials on the map, hypothesizing that nothing precluded the existence of a material that exhibits a negative index of refraction. Such a material would interact with light very differently than a normal material would. Where light ordinarily bounces off a material in the form of reflection, it would pass around this type of metamaterial like water going around a boulder in a stream.
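In terms of Snell’s law of refraction,

```latex
n_1 \sin\theta_1 = n_2 \sin\theta_2,
```

a material with a negative index n2 bends the transmitted ray to the same side of the surface normal as the incoming ray, a behavior no ordinary material exhibits.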

It took until 2000 before the theory of metamaterials was implemented in the lab. That year, Richard A. Shelby and colleagues at the University of California, San Diego, demonstrated a negative refractive index metamaterial in the microwave region. They published the discovery in 2001 in Science, causing a stir as people imagined invisibility cloaks. (While intriguing to ponder, creating such a device would require precisely manufacturing and assembling thousands of metasurfaces.)

The first metalens to create high-quality images with visible light came out of Federico Capasso’s lab at Harvard. Demonstrated in 2016, with a description of the research published in Science, the technology immediately drew interest from smartphone manufacturers. Harvard then licensed the foundational intellectual property exclusively to Metalenz, where it has now been commercialized.

A single metalens [right] can replace a stack of traditional lenses [left], simplifying manufacturing and dramatically reducing the size of a lens package. Metalenz

Since then, researchers at Columbia University, Caltech, and the University of Washington, working with Tsinghua University, in Beijing, have also demonstrated the technology.

Much of the development work Metalenz does involves fine-tuning the way the devices are designed. In order to translate image features like resolution into nanoscale patterns, we developed tools to help calculate the way light waves interact with materials. We then convert those calculations into design files that can be used with standard semiconductor processing equipment.

The first wave of optical metasurfaces to make their way into mobile imaging systems has on the order of 10 million silicon pillars on a single flat surface only a few millimeters square, each pillar precisely tuned to accept the correct phase of light, a painstaking process even with the help of advanced software. Future generations of the metalens won’t necessarily have more pillars, but they’ll likely have more sophisticated geometries, such as sloped edges or asymmetric shapes.
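Metalenz hasn’t published its design pipeline, but the flavor of the calculation can be sketched. Here is a minimal Python illustration, in which every parameter (the wavelength, focal length, pillar-diameter lookup, and grid pitch) is an assumption for the sake of the example, not a Metalenz value: it computes the textbook “hyperbolic” phase profile that brings rays from every radius to a common focus, then assigns each grid point the pillar diameter whose phase delay comes closest.

```python
import numpy as np

WAVELENGTH = 940e-9  # assumed near-infrared design wavelength, in meters
FOCAL_LEN = 2e-3     # assumed target focal length: 2 mm

def target_phase(r):
    """Textbook hyperbolic phase profile for a flat lens: light leaving
    radius r must arrive at the focal point in phase with the axial ray."""
    return (2 * np.pi / WAVELENGTH) * (FOCAL_LEN - np.sqrt(r**2 + FOCAL_LEN**2))

# Hypothetical lookup table relating pillar diameter to the phase delay it
# imparts; in practice this comes from electromagnetic simulation of pillars.
PHASES = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)   # 8 discrete designs
DIAMETERS = np.linspace(100e-9, 300e-9, 8)                # 100-300 nm pillars

def pillar_diameter(r):
    """Quantize the required phase (mod 2*pi) to the nearest available pillar."""
    phi = target_phase(r) % (2 * np.pi)
    return DIAMETERS[np.argmin(np.abs(PHASES - phi))]

# Lay pillars on a square grid across a 1-mm aperture and emit one row;
# a real flow would write every (x, y) assignment into a lithography mask file.
pitch = 400e-9  # assumed center-to-center pillar spacing
xs = np.arange(-0.5e-3, 0.5e-3, pitch)
row = [pillar_diameter(abs(x)) for x in xs]
print(f"{len(row)} pillars in one row; central pillar {row[len(row)//2]*1e9:.0f} nm")
```

A real design tool replaces the eight-entry lookup with full electromagnetic simulations, but the overall flow (target phase map in, pillar layout out) is the same.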

Metalenses migrate to smartphones

Metalenz came out of stealth mode in 2021, announcing that it was getting ready to scale up production of its devices. Manufacturing was not as big a challenge as design, because the company makes its metasurfaces using the same materials, lithography, and etching processes that are used to make integrated circuits.

In fact, metalenses are less demanding to manufacture than even a very simple microchip, because they require only a single lithography mask, as opposed to the dozens required by a microprocessor. That makes them less prone to defects and less expensive. Moreover, the features on an optical metasurface measure in the hundreds of nanometers, whereas foundries are accustomed to making chips with features smaller than 10 nanometers.

And, unlike plastic lenses, metalenses can be made in the same foundries that produce the other chips destined for smartphones. This means they could be directly integrated with the CMOS camera chips on site rather than having to be shipped to another location, which reduces their costs still further.

A single meta-optic, in combination with an array of laser emitters, can be used to create the type of high-contrast, near-infrared dot or line pattern used in 3D sensing. Metalenz

In 2022, STMicroelectronics announced the integration of Metalenz’s metasurface technology into its FlightSense modules. Previous generations of FlightSense have been used in more than 150 models of smartphones, drones, robots, and vehicles to detect distance. Such products with Metalenz technology inside are already in consumer hands, though STMicroelectronics isn’t releasing specifics.

Indeed, distance sensing is a sweet spot for the current generation of metalens technology, which operates at near-infrared wavelengths. For this application, many consumer electronics companies use a time-of-flight system, which has two optical components: one that transmits light and one that receives it. The transmitting optics are more complicated. They involve multiple lenses that collect light from a laser and transform it into parallel light waves (what optical engineers call a collimated beam), plus a diffraction grating that turns the collimated beam into a field of dots. A single metalens can replace all of those transmitting and receiving optics, saving real estate within the device as well as reducing cost.
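
The distance arithmetic underneath any time-of-flight module is simple, whatever optics sit in front of it. A minimal sketch (a generic illustration, not STMicroelectronics’ processing pipeline; the function name is ours):

```python
# Time-of-flight ranging: light travels to the target and back, so the
# distance is half the round-trip time multiplied by the speed of light.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def distance_m(round_trip_seconds: float) -> float:
    """Generic illustration; real modules also correct for offsets and noise."""
    return C * round_trip_seconds / 2.0

# A return after about 6.67 nanoseconds implies a target roughly 1 meter away.
print(distance_m(6.67e-9))  # ~1.0
```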

And a metalens does the field-of-dots job better in difficult lighting conditions because it can illuminate a broader area using less power than a traditional lens, directing more of the light to where you want it.

The future is polarized

Conventional imaging systems, at best, gather information only about the spatial position of objects and their color and brightness. But light carries another type of information: the orientation of the light waves as they travel through space, known as polarization. Future metalens applications will take advantage of the technology’s ability to detect polarized light.

The polarization of light reflecting off an object conveys all sorts of information about that object, including surface texture, type of surface material, and how deeply light penetrates the material before bouncing back to the sensor. Prior to the development of the metalens, a machine vision system would require complex optomechanical subsystems to gather polarization information. These typically rotate a polarizer—structured like a fence to allow only waves oriented at a certain angle to pass through—in front of a sensor. They then monitor how the angle of rotation impacts the amount of light hitting the sensor.

Metasurface optics are capable of capturing polarization information from light, revealing a material’s characteristics and providing depth information. Metalenz

A metalens, by contrast, doesn’t need a fence; all the incoming light comes through. Then it can be redirected to specific regions of the image sensor based on its polarization state, using a single optical element. If, for example, light is polarized along the X axis, the nanostructures of the metasurface will direct the light to one section of the image sensor. However, if it is polarized at 45 degrees to the X axis, the light will be directed to a different section. Then software can reconstruct the image with information about all its polarization states.
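
As a rough sketch of what that reconstruction step can look like, the four intensity channels can be combined into the standard Stokes parameters used throughout polarimetry. This is generic textbook math, not Metalenz’s actual software, and the function below is hypothetical:

```python
# Combine four analyzer channels (0, 45, 90, 135 degrees) into Stokes
# parameters describing the linear polarization state at each pixel.
import numpy as np

def stokes(i0, i45, i90, i135):
    s0 = i0 + i90                    # total intensity
    s1 = i0 - i90                    # horizontal vs. vertical component
    s2 = i45 - i135                  # +45 vs. -45 degree component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                       # angle of linear polarization
    return s0, s1, s2, dolp, aolp

# Toy example: light fully polarized along the x axis.
ones = np.ones((4, 4))
s0, s1, s2, dolp, aolp = stokes(ones, 0.5 * ones, 0.0 * ones, 0.5 * ones)
print(float(dolp[0, 0]), float(aolp[0, 0]))  # 1.0 (fully polarized), 0.0 rad (along x)
```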

Using this technology, we can replace previously large and expensive laboratory equipment with tiny polarization-analysis devices incorporated into smartphones, cars, and even augmented-reality glasses. A smartphone-based polarimeter could let you determine whether a stone in a ring is diamond or glass, whether concrete is cured or needs more time, or whether an expensive hockey stick is worth buying or contains microcracks. Miniaturized polarimeters could be used to determine whether a bridge’s support beam is at risk of failure, whether a patch on the road is black ice or just wet, or whether a patch of green is really a bush or a painted surface being used to hide a tank. These devices could also help enable spoof-proof facial identification, since light reflects off a 2D photo of a person at different angles than a 3D face and from a silicone mask differently than it does from skin. Handheld polarizers could improve remote medical diagnostics—for example, polarization is used in oncology to examine tissue changes.

But like the smartphone itself, it’s hard to predict where metalenses will take us. When Apple introduced the iPhone in 2007, no one could have predicted that it would spawn companies like Uber. In the same way, perhaps the most exciting applications of metalenses are ones we can’t even imagine yet.

19 May. 2023


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

LATTICE is an undergrad project from Caltech that’s developing a modular robotic transportation system for the lunar surface that uses autonomous rovers to set up a sort of cable car system to haul things like ice out of deep craters to someplace more useful. The prototype is fully functional, and pretty cool to watch in action.

We’re told that the team will be targeting a full system demonstration deploying across a “crater” on Earth this time next year. As to what those quotes around “crater” mean, your guess is as good as mine.

[ Caltech ]

Thanks, Lucas!

Happy World Cocktail Day from Flexiv!

[ Flexiv ]

Here’s what Optimus has been up to lately.

As per usual, the robot is moderately interesting, but it’s probably best to mostly just ignore Musk.

[ Tesla ]

The INSECT tarsus-inspired compliant robotic grippER with soft adhesive pads (INSECTER) uses a single electric actuator with a cable-driven mechanism. It can be easily controlled to perform a gripping motion akin to an insect tarsus (i.e., wrapping around the object) for handling various objects.

[ Paper ]

Thanks, Poramate!

Congratulations to ANYbotics on their $50 million Series B!

And from 10 years ago (!) at ICRA 2013, here is video I took of StarlETH, one of ANYmal’s ancestors.

[ ANYbotics ]

In this video we present results from the recent field-testing campaign of the DigiForest project at Evo, Finland. The DigiForest project started in September 2022 and runs up to February 2026. It brings together diverse partners working on aerial robots, walking robots, autonomous lightweight harvesters, as well as forestry decision makers and commercial companies with the goal to create a full data pipeline for digitized forestry.

[ DigiForest ]

The Robotics and Perception Group at UZH will be presenting some new work on agile autonomous high-speed flight through cluttered environments at ICRA 2023.

[ Paper ]

Robots who lift together, stay together.

[ Sanctuary AI ]

The next CYBATHLON competition, taking place in 2024, breaks down barriers between the public, people with disabilities, researchers, and technology developers. The initiative promotes the inclusion and participation of people with disabilities and improves assistance systems for everyday use by end users.

[ Cybathlon ]

19 May. 2023


IEEE’s Vision, Innovation, and Challenges Summit and Honors Ceremony showcases emerging technologies and celebrates engineering pioneers who laid the groundwork for many of today’s electronic devices. I attended this year’s events, held on 4 and 5 May in Atlanta. Here are highlights of the sessions, which are available on IEEE.tv.

The summit kicked off on 4 May at the Georgia Aquarium with a reception and panel discussion on climate change and sustainability, moderated by Saifur Rahman, IEEE president and CEO. The panel featured Chris Coco, IEEE Fellow Alberto Moreira, and IEEE Member Jairo Garcia. Coco is senior director for aquatic sustainability at the aquarium. Moreira is director of the German Aerospace Center Microwaves and Radar Institute in Oberpfaffenhofen, Bavaria. Garcia is CEO of Urban Climate Nexus in Atlanta. UCN assists U.S. cities in creating and executing climate action and resilience plans.

The panelists focused on how the climate crisis is affecting the ocean and ways technology is helping to track environmental changes.

Coco said one of the biggest challenges facilities such as his are facing is finding enough food for their animals. Because sea levels and temperatures are rising, more than 80 percent of marine life is migrating toward the Earth’s poles and away from warmer water, he said. With fish and other species moving to new habitats, ocean predators that rely on them for food are following them. This migration is making it more difficult to find food for aquarium fish, Coco said. He added that technology on buoys is monitoring the water’s quality, temperature, and levels.

Moreira, recipient of this year’s IEEE Dennis J. Picard Medal for Radar Technologies and Applications, developed a space-based synthetic aperture radar system that can monitor the Earth’s health. The system, consisting of two orbiting satellites, generates 3D maps of the planet’s surface with 2-meter accuracy and lets researchers track sea levels and deforestation. Policymakers can use the data, Moreira said, to mitigate the impact or adapt to the changes.

Those who developed technologies that changed people’s lives were recognized at the 2023 Honors Ceremony in Atlanta. Robb Cohen Photography & Video

Bridging the digital divide, ethics in AI, and the role of robotics

The IEEE Vision, Innovation, and Challenges Summit got underway on 5 May at the Hilton Atlanta, featuring panel discussions with several of this year’s award recipients about concerns related to information and communication technology (ICT), career advice, and artificial intelligence.

The event kicked off with a “fireside chat” between Vint Cerf and Katie Hafner, a technology journalist. Cerf, widely known as the “Father of the Internet,” is the recipient of this year’s IEEE Medal of Honor. He is being recognized for helping to create “the Internet architecture and providing sustained leadership in its phenomenal growth in becoming society’s critical infrastructure.”

Reflecting on his career, Cerf said “the most magical thing that came out of the Internet is the collection of people that came together to design, build, and get the Internet to work.”

The IEEE Life Fellow also spoke about the biggest challenges society faces today with ICT, including the digital divide and people using the Internet maliciously.

“I don’t want anyone to be denied access to the Internet, whether it’s because they don’t have physical access or can’t afford the service,” Cerf said. “We’re seeing a rapid increase in access recently, and I’m sure before the end of this decade anyone who wants access will have it.”

But, he added, “People are doing harmful things on the Internet to other people, such as ransomware, malware, and disinformation. I’m not surprised this is happening. It’s human frailty being replicated in the online environment. The hard part is figuring out what to do about it.”

During the Innovators Showcase session, panelists Luc Van den hove, IEEE Life Fellow Melba M. Crawford, and IEEE Fellow James Truchard offered advice on how to lead a successful company or research lab. They agreed that it’s important to bring together people from multiple disciplines and to ensure the market is ready for the product in development.

As for moving up the career ladder, Truchard said people should not exclude the role of luck.

“Engineering changes the way the world works.”

“Nothing beats dumb luck,” he said, laughing. He is a former president and CEO of National Instruments, an engineering-solutions company he helped found in Austin, Texas. He is the recipient of the IEEE James H. Mulligan Jr. Education Medal.

With the launch of ChatGPT, generative AI has become a hot topic among technologists. The “Artificial Intelligence and ChatGPT” panel focused on the ethics of generative AI and how educators can adapt the tools for classrooms. The panelists—IEEE Senior Member Carlotta Berry, IEEE Fellow Lydia E. Kavraki, and IEEE Life Fellow Rodney Brooks—also touched on which applications could benefit from robots in the future. The three have robotics backgrounds.

They agreed that when an image or text is created using generative AI, that fact needs to be made clear, especially on social media platforms.

One way to accomplish that, Berry said, is to implement policies that require documentation. Berry, a professor of electrical and computer engineering at the Rose-Hulman Institute of Technology, in Terre Haute, Ind., emphasized how gender and racial biases remain problems with AI.

Because schools won’t be able to stop students from using tools such as ChatGPT, she said, educators need to teach them how to analyze data and how to tell whether a source is valid. Berry is the recipient of the IEEE Undergraduate Teaching Award.

Brooks, an MIT robotics professor and cofounder of iRobot, said robots can help mitigate the effects of climate change and could help in caring for the elderly.

“We aren’t going to have enough people to look after them,” he said, “and it’s going to be a real problem fairly soon. We need to find a way to help the aging population maintain independence and dignity.” Brooks is the recipient of the IEEE Founders Medal.

AI and robots can be used to monitor the health of the Earth, remove pollutants from water and soil, and better understand viruses such as the one that causes COVID-19, Kavraki said. The IEEE Fellow, a computer science professor at Rice University, in Houston, is the recipient of the IEEE Frances E. Allen Medal.

Pioneers of the QR code, the cochlear implant, and the Internet

The evening’s Honors Ceremony recognized those who developed technologies that changed people’s lives, including the QR code, cochlear implants, and the Internet.

The IEEE Corporate Innovation Award went to Japanese automotive manufacturer Denso, located in Aichi, for “the innovation of QR (Quick Response) code and their widespread use across the globe.” The company’s CEO, Koji Arima, accepted the award. In his speech, the IEEE member said Denso is “committed to developing technology that makes people happy.”

About 466 million people have hearing loss, according to the World Health Organization. To help those who are hearing impaired, in the 1970s husband and wife Erwin and Ingeborg Hochmair developed the multichannel cochlear implant. For their invention, the duo are the recipients of the IEEE Alexander Graham Bell Medal.

“We hope to continue IEEE’s mission of developing technology for the benefit of humanity,” Ingeborg, an IEEE senior member, said in her acceptance speech.

The ceremony ended with the presentation of the IEEE Medal of Honor to Cerf, who received a standing ovation.

“Engineering changes the way the world works,” he said. He ended with a promise: “You ain’t seen nothing yet.”

You can watch the IEEE Awards Ceremony on IEEE.tv.

18 May. 2023


Russia’s invasion of Ukraine in 2022 put Ukrainian communications in a literal jam: Just before the invasion, Russian hackers knocked out Viasat satellite ground receivers across Europe. Then entrepreneur Elon Musk swept in to offer access to Starlink, SpaceX’s growing network of low Earth orbit (LEO) communications satellites. Musk soon reported that Starlink was suffering jamming attacks, which SpaceX fought off with software countermeasures.

In March, the U.S. Department of Defense (DOD) concluded that Russia was still trying to jam Starlink, according to documents leaked by U.S. National Guard airman Ryan Teixeira and seen by the Washington Post. Ukrainian troops have likewise blamed problems with Starlink on Russian jamming, the website Defense One reports. If Russia is jamming a LEO constellation, it would be a new layer in the silent war in space-ground communications.

“There is really not a lot of information out there on this,” says Brian Weeden, the director of program planning for the Secure World Foundation, a nongovernmental organization that studies space governance. But, Weeden adds, “my sense is that it’s much harder to jam or interfere with Starlink [than with GPS satellites].”

LEO Satellites Face New Security Risks

Regardless of their altitude or size, communications satellites transmit more power and therefore require more power to jam than navigational satellites. However, compared with large geostationary satellites, LEO satellites—which orbit Earth at an altitude of 2,000 kilometers or lower—have frequent handovers that “introduce delays and opens up more surface for interference,” says Mark Manulis, a professor of privacy and applied cryptography at the University of the Federal Armed Forces’ Cyber Defense Research Institute (CODE) in Munich, Germany.


A graphic of Earth ringed by three satellite orbits; the ring closest to the planet is low Earth orbit.

Security and communications researchers are working on defenses and countermeasures, mostly behind closed doors, but it is possible to infer from a few publications and open-source research how unprepared many LEO satellites are for direct attacks and some of the defenses that future LEO satellites may need.

For years, both private companies and government agencies have been planning LEO constellations, each numbering thousands of satellites. The DOD, for example, has been designing its own LEO satellite network to supplement its more traditional geostationary constellations for more than a decade and has already begun issuing contracts for the constellation’s construction. University research groups are also launching tiny, standardized cube satellites (CubeSats) into LEO for research and demonstration purposes. This proliferation of satellite constellations coincides with the emergence of off-the-shelf components and software-defined radio—both of which make the satellites more affordable, but perhaps less secure.

Russia’s defense agencies commissioned a system called Tobol that’s designed to counter jammers that might interfere with their own satellites, reported journalist and author Bart Hendrickx. That implies that Russia either can transmit jamming signals up to satellites, or suspects that adversaries can.

Many of the agencies and organizations launching the latest generation of low-cost satellites haven’t addressed the biggest security issues they face, researchers wrote in one review of LEO security in 2022. That may be because one of the temptations of LEO is the ability of relatively cheap new hardware to do smaller jobs.

“Satellites are becoming smaller. They are very purpose-specific,” says Ijaz Ahmad, a telecoms security researcher at the VTT Technical Research Centre in Espoo, Finland. “They have less resources for computing, processing, and also memory.” Less computing power means fewer encryption capabilities, as well as less ability to detect and respond to jamming or other active interference.

The rise of software-defined radio (SDR) has also made it easier to get hardware to accomplish new things, including allowing small satellites to cover many frequency bands. “When you make it programmable, you provide that hardware with some sort of remote connectivity so you can program it. But if the security side is overlooked, it will have severe consequences,” Ahmad says.

“At the moment there are no good standards focused on communications for LEO satellites.”
—Mark Manulis, professor of privacy and applied cryptography, University of the Federal Armed Forces

Among those consequences are organized criminal groups hacking and extorting satellite operators or selling information they have captured.

One response to the risks of software-defined radio and the fact that modern low-cost satellites require firmware updates is to include some simple physical security. Starlink did not respond to requests for comments on its security, but multiple independent researchers said they doubt today’s commercial satellites match military-grade satellite security countermeasures, or even meet the same standards as terrestrial communications networks. Of course, physical security can be defeated with a physical attack, and state actors have satellites capable of changing their orbits and grappling with, and thus perhaps physically hacking, communications satellites, the Secure World Foundation stated in an April report.

LEO Satellites Need More Focus on Cryptography, Hardware

Despite that vulnerability, LEO satellites do bring certain advantages in a conflict: There are more of them, and they cost less per satellite. Attacking or destroying a satellite “might have been useful against an adversary who only has a few high-value satellites, but if the adversary has hundreds or thousands, then it’s a lot less of an impact,” Weeden says. LEO also offers a new option: sending a message to multiple satellites for later confirmation. That wasn’t possible when only a handful of GEO satellites covered Earth, but it is a way for cooperating transmitters and receivers to ensure that a message gets through intact. According to a 2021 talk by Vijitha Weerackody, a communications engineer at Johns Hopkins University, as few as three LEO satellites may be enough for such cooperation.
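
A deliberately simplified sketch of the idea (Weerackody’s cooperative schemes work at the signal level and are far more sophisticated; the function below is hypothetical): relay copies of a message over three independent satellite paths and accept only what a majority of paths agree on.

```python
# Majority-vote confirmation across redundant satellite paths: a single
# jammed or spoofed path cannot alter the accepted message.
from collections import Counter

def confirm(copies):
    """Return the message a strict majority of paths delivered, else None."""
    value, count = Counter(copies).most_common(1)[0]
    return value if count > len(copies) / 2 else None

# One of three paths delivers garbage; the other two agree.
paths = [b"grid 41N 29E", b"\x00\xffJAMMED\x00", b"grid 41N 29E"]
print(confirm(paths))  # b'grid 41N 29E'
```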

Even with such cooperation, designers of future LEO constellations may need to respond with improved antennas, radio strategies that include spread spectrum modulation, and both temporal and transform-domain adaptive filtering. These strategies come at a cost in data throughput and complexity. But such measures may still be defeated by a strong enough signal that covers the satellite’s entire bandwidth and saturates its electronics.
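
Spread spectrum is worth unpacking, because it shows why overpowering a communications satellite takes so much transmitter muscle. In direct-sequence spread spectrum, each data bit is multiplied by a fast pseudonoise code shared by both ends; the receiver recovers the bit by correlating against the same code, while a narrowband jammer, which does not match the code, averages toward zero. A minimal sketch with assumed parameters:

```python
# Direct-sequence spread spectrum versus a narrowband jammer.
# Parameters (chips per bit, jammer strength) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
CHIPS_PER_BIT = 64     # processing gain of 10*log10(64), about 18 dB

pn = rng.choice([-1.0, 1.0], size=CHIPS_PER_BIT)  # shared pseudonoise code
bits = rng.choice([-1.0, 1.0], size=100)          # data as +/-1 symbols

# Spread: each bit becomes CHIPS_PER_BIT fast chips.
tx = np.repeat(bits, CHIPS_PER_BIT) * np.tile(pn, bits.size)

# Narrowband jammer: a sinusoid three times stronger than the signal.
t = np.arange(tx.size)
rx = tx + 3.0 * np.cos(2 * np.pi * 0.05 * t)

# Despread: correlate each bit-length block with the code. The jammer
# does not match the code, so its contribution mostly cancels out.
blocks = rx.reshape(bits.size, CHIPS_PER_BIT)
decisions = np.sign(blocks @ pn)

print("bit errors:", int(np.sum(decisions != bits)))  # usually 0 at this gain
```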

“There’s a need to introduce a strong cryptographic layer,” says Manulis. “At the moment there are no good standards focused on communications for LEO satellites. Governments should push for standards in that area relying on cryptography.” The U.S. National Institute of Standards and Technology does have draft guidelines for commercial satellite cybersecurity that satellite operator OneWeb took into account when designing its LEO constellation, says OneWeb principal cloud-security architect Wendy Ng: “Hats off to them, they do a lot of work speaking to different vendors and organizations to make sure they’re doing the right thing.”

OneWeb uses encryption in its control channels, something a surprising number of satellite operators fail to do, says Johannes Willbold, a doctoral student at Ruhr University, in Bochum, Germany. Willbold is presenting his analysis of three research satellites’ security on 22 May 2023 at the IEEE Symposium on Security and Privacy. “A lot of satellites had straight-up no security measures to protect access in the first place,” he says.
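
What a strong cryptographic layer on a control channel can look like, in miniature: authenticated encryption with a pre-shared key and a counter-based nonce, so captured commands cannot be replayed and forged or tampered ones are rejected. Below is a minimal sketch using the Python cryptography package; it is a generic illustration, not OneWeb’s or any operator’s actual protocol, and the key management is assumed.

```python
# Authenticated encryption (ChaCha20-Poly1305) for a satellite
# telecommand uplink. Assumes a key shared between ground and satellite.
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # in practice, provisioned before launch
aead = ChaCha20Poly1305(key)

def seal_command(counter: int, command: bytes):
    # 96-bit nonce from a strictly increasing counter prevents replay.
    nonce = counter.to_bytes(12, "big")
    return nonce, aead.encrypt(nonce, command, b"telecommand-v1")

def open_command(nonce: bytes, ciphertext: bytes) -> bytes:
    # Raises InvalidTag if the ciphertext was modified or forged.
    return aead.decrypt(nonce, ciphertext, b"telecommand-v1")

nonce, ct = seal_command(1, b"ADCS: desaturate reaction wheels")
print(open_command(nonce, ct))
```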

Securing the growing constellations of LEO satellites matters to troops in trenches, investors in any space endeavor, anyone traveling into Earth orbit or beyond, and everyone on Earth who uses satellites to navigate or communicate. “I’m hoping there will be more initiatives where we can come together and share best practices and resources,” says OneWeb’s Ng. Willbold, who cofounded an academic workshop on satellite security, is optimistic that there will be: “It’s surprising to me how many people are now in the field, and how many papers they submitted.”

17 May. 2023


Ever since Lwanga Herbert was a youngster growing up in Kampala, Uganda, he wanted to create technology to improve his community. While attending a vocational school, he participated in a project that sought technological solutions for local electricians who were having problems troubleshooting systems.

Herbert helped develop a detector to measure voltage levels in analog electronics; a pulse detector to identify digital pulses and signals; and a proximity alarm system. The tools he helped develop made troubleshooting easier and faster for the electricians. When he understood the impact his work had, he was inspired to pursue engineering as a career.

“I saw firsthand that technology increases the speed, efficiency, and effectiveness of solving challenges communities face,” he says.

The devices were recognized by the Uganda National Council for Science and Technology. The level and pulse detectors were registered as intellectual property through the African Regional Intellectual Property Organization.

Herbert now works to use technology to address challenges faced by Uganda as a whole, such as high neonatal death rates.

The IEEE member is the innovation director at the Log’el Science Foundation. The nonprofit, which was launched in 2001, works to foster technological development in Uganda. It strives to enable a more competitive job market by helping startups succeed and by sparking interest in science, technology, engineering, and math careers.

Herbert has been active with IEEE’s humanitarian programs and is chair of the newly established IEEE Humanitarian Technology Board. HTB will oversee and support all humanitarian activities across IEEE and is responsible for fostering new collaborations. It also will fund related projects and activities.

Because of his busy schedule, The Institute conducted this interview via email. We asked him about the goals of the Log’el Science Foundation, his humanitarian work, and how his IEEE membership has advanced his career. His answers have been edited for clarity.

The Institute: What are you working on at the foundation?

Lwanga Herbert: The foundation has four main projects: an incubation program; STEM education outreach; internship opportunities for both undergraduate and graduate students; and entrepreneurship development.

The incubation program assists technology startups during their vulnerable inception stages, enabling them to grow and flourish. The objective is to encourage and promote innovation-based entrepreneurship by providing assistance and support such as mentorship, connecting participants to business and technical institutions, and facilitating courses on a range of technology and management topics.

The STEM education program engages youth across the country by arranging for professional engineers to talk to students in primary school, high school, and college about their work. This greatly inspires and motivates them to embrace a career in STEM.

The goal of connecting students to internships is to help them put the theoretical knowledge they learned at school into practice. The program helps prepare young learners for the workplace and provides them with career development opportunities.

The entrepreneurship program’s goal is to instill the culture of entrepreneurship into the mindset of young people. [The program teaches business skills and holds competitions.] The Log’el Science Foundation hopes this leads to the creation of rich and creative business, scientific, technological, agricultural, and production operations in Uganda.

What kind of impact have you seen from the programs?

Herbert: They have enabled students to secure employment much faster than before and allowed their self-confidence to rise. Because they have more self-confidence, students have been able to start and operate successful business ventures. The outreach programs also enable young learners to develop and strengthen their interests in STEM-related career paths.

What challenges have you faced at your job, and how did you overcome them?

Herbert: One of the key challenges is that the innovation process takes time to produce results, and therefore I need a lot of patience and sustained focus. I always remind myself to have hope, commitment, and passion when dealing with the process.

Another challenge is making sure I stay inspired and motivated. Working in a non-inspiring and non-motivating society can bring down an innovator’s self-confidence and sense of direction. I have found that networking with a wide variety of people can help keep my morale up.

Is there a humanitarian effort you’ve been a part of that stands out?

Herbert: I led an IEEE Humanitarian Activities Committee-supported project in 2019 that aimed to reduce neonatal death rates and injuries among newborn babies in Uganda.

There are considerable gaps in neonatal health care because of understaffing and a lack of functional medical equipment. Many neonatal deaths can be prevented with proper equipment.

Both IEEE programs collaborated with Neopenda, a health tech startup founded in 2015 that designs and manufactures wearables. The device we developed monitors four major vital signs of a newborn: heart rate, respiration, blood oxygen saturation, and temperature. If any abnormalities in the vital signs are identified, they can be corrected in a timely manner, thereby [helping] prevent ill health or even death.

When did you join IEEE and why? How has being a member benefited your career?

Herbert: I joined in 2009 when I was a student at Kyambogo University in Kampala, because of its collaborative environment, global membership, and humanitarian efforts.

As an IEEE member I have been able to improve my professional skills by learning how to be a team player, understand market needs, view challenges as opportunities, and develop solutions to those challenges. It has also provided me with opportunities to contribute my knowledge to the technological community and learn how to work with people across the globe.

Why is the formation of the HTB important for IEEE?

Herbert: The elevation of what was previously the IEEE Humanitarian Activities Committee to the new HTB reflects the growth of IEEE Special Interest Group on Humanitarian Technology (SIGHT) membership, project proposals, and funded teams. It also reflects the fact that 30 percent of all active IEEE members, and 60 percent of active IEEE student members, indicate an interest in the organization’s humanitarian programs when they join IEEE or renew their annual membership.

It demonstrates the support of IEEE leaders, who have provided us with the structure to expand our role in supporting humanitarian technology activities across IEEE. Now we are poised to unite efforts, share best practices, and better capture the entire story of humanitarian technology at IEEE. We can use that to play a more coordinated role in the global humanitarian technology space with the ultimate goal of more effectively helping the world.

What are your goals as the first chair of HTB?

Herbert: Some of HTB’s goals this year include strengthening and expanding partnerships and collaborations with IEEE entities; enhancing support for humanitarian technologies and sustainable development activities; facilitating capacity building so IEEE members can access more educational resources and opportunities in the area of humanitarian technology and sustainable development; and creating awareness to increase the understanding of the role of engineering and technology in sustainable development.

Earlier this year HTB held a call for proposals in collaboration with IEEE SIGHT for IEEE member grassroots projects that utilize technology to address pressing needs of the members’ local communities. For the first time, the areas of technical interest included sustainable development. The call for proposals also sought projects that use existing technologies to help solve challenges faced by people with disabilities or collaborate with local organizations that serve people with disabilities.

Serving as the first chair of HTB, with its expanded role and responsibilities, sounds like a daunting task, as there is a lot to be done. The good news is that HTB is building upon a solid foundation and benefits from new board members who represent the Member and Geographic Activities Board, Technical Activities Board, Educational Activities Board, Standards Association Board, and IEEE Young Professionals. With this team, I feel strongly that we can accomplish HTB’s mission and yearly goals and continue to make a lasting impact.

17 May. 2023


Rapid and pivotal advances in technology have a way of unsettling people, because they can reverberate, sometimes mercilessly, through business, employment, and cultural spheres. And so it is with the current shock and awe over large language models, such as GPT-4 from OpenAI.

It’s a textbook example of the mixture of amazement and, especially, anxiety that often accompanies a tech triumph. And we’ve been here many times, says Rodney Brooks. Best known as a robotics researcher, academic, and entrepreneur, Brooks is also an authority on AI: he directed the Computer Science and Artificial Intelligence Laboratory at MIT until 2007, and held faculty positions at Carnegie Mellon and Stanford before that. Brooks, who is now working on his third robotics startup, Robust.AI, has written hundreds of articles and half a dozen books and was featured in the motion picture Fast, Cheap & Out of Control. He is a rare technical leader who has had a stellar career in business and in academia and has still found time to engage with the popular culture through books, popular articles, TED Talks, and other venues.

“It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong.”
—Rodney Brooks, Robust.AI

IEEE Spectrum caught up with Brooks at the recent Vision, Innovation, and Challenges Summit, where he was being honored with the 2023 IEEE Founders Medal. He spoke about this moment in AI, which he doesn’t regard with as much apprehension as some of his peers, and about his latest startup, which is working on robots for medium-size warehouses.

You wrote a famous article in 2017, “The Seven Deadly Sins of AI Prediction.” You said then that you wanted an artificial general intelligence to exist—in fact, you said it had always been your personal motivation for working in robotics and AI. But you also said that AGI research wasn’t doing very well at that time at solving the basic problems that had remained intractable for 50 years. My impression now is that you do not think the emergence of GPT-4 and other large language models means that an AGI will be possible within a decade or so.

Rodney Brooks: You’re exactly right. And by the way, GPT-3.5 guessed right—I asked it about me, and it said I was a skeptic about it. But that doesn’t make it an AGI.

The large language models are a little surprising. I’ll give you that. And I think what they say, interestingly, is how much of our language is very much rote, R-O-T-E, rather than generated directly, because it can be collapsed down to this set of parameters. But in that “Seven Deadly Sins” article, I said that one of the deadly sins was how we humans mistake performance for competence.

If I can just expand on that a little. When we see a person with some level of performance at some intellectual thing, like describing what’s in a picture, for instance, from that performance we can generalize about their competence in the area they’re talking about. And we’re really good at that. Evolutionarily, it’s something that we ought to be able to do. We see a person do something, and we know what else they can do, and we can make a judgment quickly. But our models for generalizing from a performance to a competence don’t apply to AI systems.

The example I used at the time was, I think it was a Google program labeling an image of people playing Frisbee in the park. And if a person says, “Oh, that’s a person playing Frisbee in the park,” you would assume you could ask him a question, like, “Can you eat a Frisbee?” And they would know, of course not; it’s made of plastic. You’d just expect they’d have that competence. That they would know the answer to the question, “Can you play Frisbee in a snowstorm? Or, how far can a person throw a Frisbee? Can they throw it 10 miles? Can they only throw it 10 centimeters?” You’d expect all that competence from that one piece of performance: a person saying, “That’s a picture of people playing Frisbee in the park.”

“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”
—Rodney Brooks, Robust.AI

We don’t get that same level of competence from the performance of a large language model. When you poke it, you find that it doesn’t have the logical inference that it may have seemed to have in its first answer.

I’ve been using large language models for the last few weeks to help me with the really arcane coding that I do, and they’re much better than a search engine. And no doubt, that’s because it’s 4,000 parameters or tokens. Or 60,000 tokens. So it’s a lot better than just a 10-word Google search. More context. So when I’m doing something very arcane, it gives me stuff.

But what I keep having to do, and I keep making this mistake—it answers with such confidence any question I ask. It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong. And I spend 2 or 3 hours using that hint, and then I say, “That didn’t work,” and it just does this other thing. Now, that’s not the same as intelligence. It’s not the same as interacting. It’s looking it up.

It sounds like you don’t think GPT-5 or GPT-6 is going to make a lot of progress on these issues.

Brooks: No, because it doesn’t have any underlying model of the world. It doesn’t have any connection to the world. It is correlation between language.

By the way, I recommend a long blog post by Stephen Wolfram. He’s also turned it into a book.

I’ve read it. It’s superb.

Brooks: It gives a really good technical understanding. What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.

Not long after ChatGPT and GPT-3.5 went viral last January, OpenAI was reportedly considering a tender offer that valued the company at almost $30 billion. Indeed, Microsoft invested an amount that has been reported as $10 billion. Do you think we’re ever going to see anything come out of this application that will justify these kind of numbers?

Brooks: Probably not. My understanding is that Microsoft’s initial investment was in cloud-computing time rather than cold, hard cash. OpenAI certainly needed [cloud-computing time] to build these models because they’re enormously expensive in terms of the computing needed. I think what we’re going to see—and I’ve seen a bunch of papers recently about boxing in large language models—is much smoother language interfaces, input and output. But you have to box things in carefully so that the craziness doesn’t come out, and the making stuff up doesn’t come out.

“I think they’re going to be better than the Watson Jeopardy! program, which IBM said, ‘It’s going to solve medicine.’ Didn’t at all. It was a total flop. I think it’s going to be better than that.”
—Rodney Brooks, Robust.AI

So you’ve got to box things in because it’s not a database. It just makes up stuff that sounds good. But if you box it in, you can get really much better language than we’ve had before.

So when the smoke clears, do you think we’ll have major applications? I mean, putting aside the question of whether they justify the investments or the valuations, is it going to still make a mark?

Brooks: I think it’s going to be another thing that’s useful. It’s going to be better language input and output. Because of the large numbers of tokens that get buffered up, you get much better context. But you have to box it so much…I am starting to see papers, how to put this other stuff on top of the language model. And sometimes it’s traditional AI methods, which everyone had sort of forgotten about, but now they’re coming back as a way of boxing it in.

I wrote a list of about 30 or 40 events like this over the last 50 years where it was going to be the next big thing. And many of them have turned out to be utter duds. Others are useful, like the chess-playing programs in the ’90s. That was supposed to be the end of humans playing chess. No, it wasn’t the end of humans playing chess. Chess is a different game now and that’s interesting.

But just to articulate where I think the large language models come in: I think they’re going to be better than the Watson Jeopardy! program, which IBM said, “It’s going to solve medicine.” Didn’t at all. It was a total flop. I think it’s going to be better than that. But not AGI.

“A very famous senior person said, ‘Radiologists will be out of business before long.’ And people stopped enrolling in radiology specialties, and now there’s a shortage of them.”
—Rodney Brooks, Robust.AI

So what about these predictions that entire classes of employment will go away, paralegals, and so on? Is that a legitimate concern?

Brooks: You certainly hear these things. I was reviewing a government report a few weeks ago, and it said, “Lawyers are going to disappear in 10 years.” So I tracked it down and it was one barrister in England, who knew nothing about AI. He said, “Surely, if it’s this good, it’s going to get so much better that we’ll be out of jobs in 10 years.” There’s a lot of disaster hype. Someone suggests something and it gets amplified.

We saw that with radiologists. A very famous senior person said, “Radiologists will be out of business before long.” And people stopped enrolling in radiology specialties and now there’s a shortage of them. Same with truck driving…. There are so many ads from all these companies recruiting truck drivers because there’s not enough truck drivers, because three or four years ago, people were saying, “Truck driving is going to go away.”

In fact, six or seven years ago, there were predictions that we would have fully self-driving cars by now.

Brooks: Lots of predictions. CEOs of major auto companies were all saying by 2020 or 2021 or 2022, roughly.

Full self-driving, or level 5, still seems really far away. Or am I missing something?

Brooks: No. It is far away. I think the level-2 and level-3 stuff in cars is amazingly good now. If you get a brand-new car and pay good money for it, it’s pretty amazingly good. The level 5, or even level 4, not so much. I live in the city of San Francisco, and for almost a year now, if it’s not a foggy day, I’ve been able to take rides in a Cruise vehicle with no driver after 10:30 p.m. and before 5:00 a.m. Just in the last few weeks, Cruise and Waymo got an agreement with the city where every day, I now see cars, as I’m driving during the day, with no driver in them.

GM supposedly lost $561 million on Cruise in just the first three months of this year.

Brooks: That’s how much it cost them to run that effort. Yeah. It’s a long way from breakeven. A long, long way from breakeven.

So I mean, I guess the question is, can even a company like GM get from here to there, where it’s throwing off huge profits?

Brooks: I wonder about that. We’ve seen a lot of the efforts shut down. It sort of didn’t make sense that there were so many different companies all trying to do it. Maybe, now that we’re merged down to one or two efforts and out of that, we’ll gradually get there. But here’s another case where the hype, I think, has slowed us down. In the ’90s, there was a lot of research, especially at Berkeley, about what sensors you could embed in freeways which would help cars drive without a driver paying attention. So putting sensors, changing the infrastructure, and changing the cars so they used that new infrastructure, you would get attentionless driving.

“One of the standard processes has four teraflops—four million million floating point operations a second on a piece of silicon that costs 5 bucks. It’s just mind-blowing, the amount of computation.”
—Rodney Brooks, Robust.AI

But then the hype came: “Oh no, we don’t even need that. It’s just going to be a few years and the cars will drive themselves. You don’t need to change infrastructure.” So we stopped changing infrastructure. And I think that slowed the whole project of autonomous vehicles for commuting down by at least 10, maybe 20 years. There’s a few companies starting to do it now again.

It takes a long time to make these things real.

I don’t really enjoy driving, so when I see these pictures from popular magazines in the 1950s of people sitting in bubble-dome cars, facing each other, four people enjoying themselves playing cards on the highway, count me in.

Brooks: Absolutely. And as a species, humanity, we have changed up our mobility infrastructure multiple times. In the early 1800s, it was steam trains. We had to do enormous changes to our infrastructure. We had to put flat rails right across countries. When we started adopting automobiles around the turn from the 19th to the 20th century, we changed the roads. We changed the laws. People could no longer walk in the middle of the road like they used to.

A 1950s black-and-white illustration of four people playing a tile game in a bubble-top car that drives itself down the highway. Hulton Archive/Getty Images

We changed the infrastructure. When you go from trains that are driven by a person to self-driving trains, such as we see in airports and a few out there, there’s a whole change in infrastructure so that you can’t possibly have a person walking on the tracks. We’ve tried to make this transition [to self-driving cars] without changing infrastructure. You always need to change infrastructure if you’re going to do a major change.

You recently wrote that there will be no viable robotics applications that will harness the serious power of GPTs in any meaningful way. But given that, is there some other avenue of AI development now that will prove more beneficial for robotics, or more transformative? Or alternatively, will AI and robotics kind of diverge for a while, while enormous resources are put on large language models?

Brooks: Well, let me give a very positive spin. There has been a transformation. It’s just taking a little longer to get there. Convolutional neural networks being able to label regions of an image. It’s not perceiving in the same way a person perceives, but we can label what’s there. Along with the end of Moore’s Law and Dennard scaling—this is allowing silicon designers to get outside of the idea of just a faster PC. And so now, we’re seeing very cheap pieces of very effective silicon that you put right with a camera. Instead of getting an image out, you now get labels out, labels of what’s there. And it’s pretty damn good. And it’s really cheap. So one of the standard processes has four teraflops—four million million floating point operations a second on a piece of silicon that costs 5 bucks. It’s just mind-blowing, the amount of computation.

That’s narrow floating point, 16-bit floating point, being applied to this labeling. We’re not seeing that yet in many deployed robots, but a lot of people are using it, building, experimenting, getting toward product. So there’s a case where AI, convolutional neural networks—which, by the way, applied to vision is 10 years old—is going to make a difference.
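
To make the labels-out-not-images-out idea concrete, here is a minimal sketch using an off-the-shelf lightweight detector from torchvision. It is a generic illustration, not any camera vendor’s firmware; a production part would run the network in 16-bit or integer arithmetic on dedicated silicon rather than in full precision on a CPU.

```python
# Run a small pretrained CNN detector on one frame and emit only labels.
import torch
from torchvision.models.detection import (
    fasterrcnn_mobilenet_v3_large_320_fpn,
    FasterRCNN_MobileNet_V3_Large_320_FPN_Weights,
)

weights = FasterRCNN_MobileNet_V3_Large_320_FPN_Weights.DEFAULT
model = fasterrcnn_mobilenet_v3_large_320_fpn(weights=weights).eval()

frame = torch.rand(3, 320, 320)  # stand-in for one camera frame in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]

# Ship labels, not pixels: keep only confident detections. (A random
# frame like the one above will usually produce no confident hits.)
names = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.5:
        print(names[int(label)], float(score))
```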

“Amazon really made life difficult for other suppliers by doing [robotics in the warehouse]. But 80 percent of warehouses in the U.S. have zero automation; only 5 percent are heavily automated.”
—Rodney Brooks, Robust.AI

And here’s one of my other “Seven Deadly Sins of AI Prediction.” It was how fast people think new stuff is going to be deployed. It takes a while to deploy it, especially when hardware is involved, because that’s just lots of stuff that all has to be balanced out. It takes time. Like the self-driving cars.

So of the major categories of robotics—warehouse robots, collaborative robots, manufacturing robots, autonomous vehicles—which are the most exciting right now to you or which of these subdisciplines has experienced the most rapid and interesting growth?

Brooks: Well, I’m personally working in warehouse robots for logistics.

And your last company did collaborative robots.

Brooks: Did collaborative robots in factories. That company was a beautiful artistic success, a financial failure, but—

This is Rethink Robotics.

Brooks: Rethink Robotics, but going on from that now, we were too early, and I made some dumb errors. I take responsibility for that. Some dumb errors in how we approached the market. But that whole thing is now—that’s going along. It’s going to take another 10 or 15 years.

Collaborative robots will.

Brooks: Collaborative robots, but that’s what people expect now. Robots don’t need to be in cages anymore. They can be out with humans. In warehouses, we’ve had more and more. You buy stuff at home and expect it to be delivered to your home. COVID accelerated that.

People expect it the same day now in some places.

Brooks: Well, Amazon really made life difficult for other suppliers by doing it. But 80 percent of warehouses in the U.S. have zero automation; only 5 percent are heavily automated.

And those are probably the largest.

Brooks: Yeah, they’re the big ones. Amazon has enormous numbers of those, for instance. But there’s a large number of warehouses which don’t have automation.

So these are medium-size warehouses?

Brooks: Yeah. 100,000 square feet, something of that sort, whereas the Amazon ones tend to be over a million square feet and you completely rebuild it [around the automation]. But these 80 percent are not going to get rebuilt. They have to adopt automation into an existing workflow and modify it over time. And there are a few companies that have been successful, and I think there’s a lot of room for other companies and other workflows.

“[Warehouse workers] are not subject to the whims of the automation. They get to take over. When the robot’s clearly doing something dumb, they can just grab it and move it, and it repairs.”
—Rodney Brooks, Robust.AI

So for your current company, Robust.AI, this is your target.

Brooks: That’s what we’re doing. Yeah.

So what is your vision? You have a software suite called Grace, and you also have a hardware platform called Carter.

Brooks: Exactly. And let me say a few words about it. We start with the assumption that there are going to be people in the warehouses that we’re in. There’s going to be people for a long time. It’s not going to be lights-out, full automation, because those 80 percent of warehouses are not going to rebuild the whole thing and put millions of dollars of equipment in. They’ll be gradually putting stuff in. So we’re trying to make our robots human-centered, we call it. They’re aware of people. They’re using convolutional neural networks to see that that’s a person, to see which way they’re facing, to see where their legs are, where their arms are. You can track that in real time, 30 frames a second, right at the camera. And knowing where people are, who they are. They are people, not obstacles, so we treat them with respect.

But then the magic of our robot is that it looks like a shopping cart. It’s got handlebars on it. If a person goes up and grabs it, it’s now a powered shopping cart or powered cart that they can move around. So [the warehouse workers] are not subject to the whims of the automation. They get to take over. When the robot’s clearly doing something dumb, they can just grab it and move it, and it repairs.

You are unusual for a technologist because you think broadly, and you’re not afraid to have an opinion on things going on in the technical conversation. I mean, we’re living in really interesting times in this weird postpandemic world where lots of things seem to be at some sort of inflection point. Are there any big projects now that fill you with hope and optimism? What are some big technological initiatives that give you hope or enthusiasm?

Brooks: Well, here’s one that I haven’t written about, but I’ve been aware of and following. Climate change makes farming more difficult, more uncertain. So there’s a lot of work on indoor farming, changing how we do farming from the way we’ve done it for the 10,000 years we, as a species, have been farming that we know about, to technology indoors, and combining it with genetic engineering of microbes, combining it with a lot of computation, machine learning, getting the control loops right. There’s some fantastic things at small scale right now, producing interesting, good food in ways that are so much cleaner, use so much less water, and give me hope that we will be able to have a viable food supply. Not just horrible gunk to eat, but actually stuff that we like, with a way smaller impact on our planet than farm animals have and the widespread use of fertilizer, polluting the water supplies. I think we can get to a good, clean system of providing food for billions of people. I’m really hopeful about that. I think there’s a lot of exciting things happening there. It’s going to take 10, 20, 30 years before it becomes commonplace, but already, in my local grocery store in San Francisco, I can buy lettuce that’s grown indoors. So we’re seeing leafy greens getting out to the mainstream already. There’s a whole lot more coming.

17 May. 2023


The war between Russia and Ukraine is making a lot of high-tech military systems look like so many gold-plated irrelevancies. That’s why both sides are relying increasingly on low-tech alternatives—dumb artillery shells instead of pricey missiles, and drones instead of fighter aircraft.

“This war is a war of drones, they are the super weapon here,” Anton Gerashchenko, an adviser to Ukraine’s minister of internal affairs, told Newsweek earlier this year.

In early May, Russia attributed explosions at the Kremlin to drones sent by Ukraine for the purpose of assassinating Vladimir Putin, the Russian leader. Ukraine denied the allegation. True, the mission to Moscow was ineffectual, but it is amazing that it could be managed at all.

Drones fly slower than an F-35, carry a smaller payload, beckon ground fire, and last mere days before being shot out of the skies. But for the most part, the price is right: China’s DJI Mavic 3, used by both Russia and Ukraine for surveillance and for delivering bombs, goes for around US $2,000. You can get 55,000 of them for the price of a single F-35. Also, they’re much easier to maintain: When they break, you throw them out, and there’s no pilot to be paraded through the streets of the enemy capital.

Smoke clouds rise on a flat-screen monitor above a struck target, as a Ukrainian serviceman of the Adam tactical group operates a drone to spot Russian positions near the city of Bakhmut, Donetsk region, on 16 April 2023, amid the Russian invasion of Ukraine. Sergey Shestak/AFP/Getty Images

You can do a lot with 55,000 drones. Shovel them at the foe and one in five may make it through. Yoke them together and send them flocking like a murmuration of starlings, and they will overwhelm antiaircraft defenses. Even individually they can be formidable. One effective tactic is to have a drone “loiter” near a point where targets are expected to emerge, then dash in and drop a small bomb. Videos posted on social media purport to show Ukrainian remote operators dropping grenades on Russian troops or through the hatches of Russian armored vehicles. A drone gives a lot of bang for the buck, as utterly new weapons often do.

Over time, as a weapons system provokes countermeasures, its designers respond with improvements, and the gold-plating accumulates.

In 1938, a single British Spitfire cost £9,500 to produce, equivalent to about $1 million today. In the early 1950s the United States F-86 Sabre averaged about $250,000 apiece, about $3 million now. The F-35, today’s top-of-the-line U.S. fighter, starts at $110 million. Behold the modern-day fighter plane: the hypertrophied product of the longest arms race since the days of the dreadnought.

“In the year 2054, the entire defense budget will purchase just one aircraft,” wrote Norman Augustine, formerly Under Secretary of the Army, back in 1984. “This aircraft will have to be shared by the Air Force and Navy 3 1/2 days each per week except for leap year, when it will be made available to the Marines for the extra day.”
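
Augustine was extrapolating a real exponential. A rough sketch of the trend, fitting a growth rate to the three inflation-adjusted price points above (the years are approximate, and the figures are this article’s, not an official dataset):

    import math

    # Inflation-adjusted fighter unit costs quoted above: (year, US dollars).
    points = [(1938, 1e6), (1952, 3e6), (2023, 110e6)]

    # Least-squares fit of log(cost) = a + r * year, i.e. exponential growth.
    n = len(points)
    xbar = sum(year for year, _ in points) / n
    ybar = sum(math.log(cost) for _, cost in points) / n
    r = (sum((year - xbar) * (math.log(cost) - ybar) for year, cost in points)
         / sum((year - xbar) ** 2 for year, _ in points))

    print(f"real cost growth: about {100 * (math.exp(r) - 1):.1f}% per year")
    print(f"doubling time: about {math.log(2) / r:.0f} years")

The fit comes out to roughly 5.5 percent real growth per year, with unit cost doubling about every 13 years: slow enough to shrug off in any single budget cycle, fast enough to make Augustine’s 2054 punch line feel uncomfortably plausible.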

Like fighter planes, military drones started cheap, then got expensive. Unlike the fighters, though, they got cheap again.

“Sophisticated tech is more readily available, and with AI advances and the potential for swarms, there’s even more emphasis on quantity.”
—Kelly A. Grieco, Stimson Center

Back in 1981, Israel sent modest contraptions sporting surveillance cameras in its war against Syria, to some effect. The U.S. military took hold of the concept, and in its hands, those simple drones morphed into Predators and Reapers, bomber-size machines that flew missions in Iraq and Afghanistan. Each cost millions of dollars (if not tens of millions). But a technologically powerful country needn’t count the cost; the United States certainly didn’t.

“We are a country of technologists, we love technological solutions,” says Kelly A. Grieco, a strategic analyst at the Stimson Center, a think tank in Washington, D.C. “It starts with the Cold War: Looking at the Soviet Union, their advantages were in numbers and in their close approach to Germany, the famous Fulda Gap. So we wanted technology to offset the Soviet numerical advantage.”

A lot of the cost in an F-35 can be traced to the stealth technology that lets it elude even very sophisticated radar. The dreadnoughts of old needed guns of ever-greater range—enough finally to shoot beyond the horizon—so that the other side couldn’t hold them at arm’s length and pepper them with shells the size of compact cars.

Arms races tend to shift when a long peacetime buildup finally ends, as it has in Ukraine.

“The character of war has moved back toward quantity mattering,” Grieco says. “Sophisticated tech is more readily available, and with AI advances and the potential for swarms, there’s even more emphasis on quantity.”

A recent research paper she wrote with U.S. Air Force Col. Maximilian K. Bremer notes that China has showcased such capabilities, “including a swarm test of 48 loitering munitions loaded with high-explosive warheads and launched from a truck and helicopter.”

What makes these things readily available—as the nuclear and stealth technologies were not—is the Fourth Industrial Revolution: 3D printing, easy wireless connections, AI, and the big data that AI consumes. These things are all out there, on the open market.

“You can’t gain the same advantage from simply possessing the technology,” Grieco says. “What will become more important will be how you use it.”

One example of how experience has changed use comes from the early days of the war in Ukraine. That country scored early successes with the Baykar Bayraktar TB2, a Turkish drone priced at an estimated $5 million each, about one-sixth as much as the United States’ Reaper, which it broadly resembles. That’s not cheap, except by U.S. standards.

Right now the militaries of the world are working on ways to shoot down small drones with directed-energy weapons based on lasers or microwaves.

“The Bayraktar was extremely effective at first, but after Russia got its act together with air defense, they were not as effective by so large a margin,” says Zach Kallenborn, a military consultant associated with the Center for Strategic and International Studies, a think tank in Washington, D.C. That, he says, led both sides to move to masses of cheaper drones that get shot down so often they have a working life of maybe three to four days. So what? It’s a good cost-benefit ratio for drones as cheap as Ukraine’s DJIs and for Russia’s new equivalent, the Shahed-136, supplied by Iran.

Ukraine has also resorted to homemade drones as an alternative to long-range jet fighters and missiles, which Western donors have so far refused to provide. It recently launched such drones from its own territory to targets hundreds of kilometers inside Russia; Ukrainian officials said that they were working on a model that would fly about 1,000 kilometers.

Every military power is now staring at these numbers, not least the United States and China. If those two powers ever clash, it would likely be over Taiwan, which China says it will one day absorb and the United States says it will defend. Such a far-flung maritime arena would be very different from the close-in land war going on now in Eastern Europe. The current war may therefore not be a good guide to future ones.

“I don’t buy that drones will transform all of warfare. But even if they do, you’d need to get them all the way to Taiwan. And to do that you’d need [aircraft] carriers,” says Kallenborn. “And you’d need a way to communicate with drones. Relays are possible, but now satellites are key, so China’s first move might be to knock out satellites. There’s reason to doubt they would, though, because they need satellites, too.”

In every arms race there is always another step to take. Right now the militaries of the world are working on ways to shoot down small drones with directed-energy weapons based on lasers or microwaves. The marginal cost of a shot would be low—once you’ve amortized the expense of developing, making, and deploying such weapons systems.

Should such antidrone measures succeed, the next generation of drones will be hardened against them. With gold plating.

16 May. 2023


We’ve been keeping track of Sanctuary AI for quite a while, mainly through the company’s YouTube videos that show the upper half of a dexterous humanoid performing a huge variety of complicated manipulation tasks, thanks to the teleoperation skills of a remote human pilot.

Despite a recent successful commercial deployment of the teleoperated system at a store in Canada (where it was able to complete 110 retail-related tasks), Sanctuary’s end goal is way, way past telepresence. The company describes itself as “on a mission to create the world’s-first humanlike intelligence in general-purpose robots.” That sounds extremely ambitious, depending on what you believe “humanlike intelligence” and “general-purpose robots” actually mean. But today, Sanctuary is unveiling something that indicates a substantial amount of progress toward this goal: Phoenix, a new bipedal humanoid robot designed to do manual (in the sense of hand-dependent) labor.


Sanctuary’s teleoperated humanoid is very capable, but teleoperation is of course not scalable in the way that even partial autonomy is. What all of this teleop has allowed Sanctuary to do is to collect lots and lots of data about how humans do stuff. The long-term plan is that some of those human manipulation skills can eventually be transferred to a very humanlike robot, which is the design concept underlying Phoenix.

Some specs from the press release:

  • Humanlike form and function: standing at 5’ 7” (170 centimeters) and weighing 155 pounds (70.3 kilograms)
  • A maximum payload of 55 pounds (24.9 kg)
  • A maximum speed of 3 miles per hour (4.8 kilometers per hour)
  • Industry-leading robotic hands with increased degrees of freedom (20 in total) that rival human hand dexterity and fine manipulation with proprietary haptic technology that mimics the sense of touch

The hardware looks very impressive, but you should take the press release with a grain of salt, as it claims that the control system (called Carbon) “enables Phoenix to think and act to complete tasks like a person.” That may be the goal, but the company is certainly not there yet. For example, Phoenix is not currently walking, and is mobile thanks to a small wheeled autonomous base. We’ll get into the legs a bit more later on, but Phoenix has a ways to go in terms of functionality. This is by no means a criticism—robots are super hard, and a useful and reliable general-purpose bipedal humanoid is super-duper hard. For Sanctuary, there’s a long road ahead, but they’ve got a map, and some snacks, and experienced folks in the driver’s seat, to extend that metaphor just a little too far.


Sanctuary’s plan is to start with telepresence and use that as a foundation on which to iterate toward general-purpose autonomy. The first step actually doesn’t involve robots at all—it’s to sensorize humans and record their movements while they do useful stuff out in the world. The data collected that way are used to design effective teleoperated robots, and as those robots get pushed back out into the world to do a bunch of that same useful stuff under teleoperation, Sanctuary pays attention to what tasks or subtasks keep getting repeated over and over. Things like opening a door or grasping a handle are the first targets to transition from teleoperated to autonomous. Automating some of the human pilot’s duties significantly boosts their efficiency. From there, Sanctuary will combine those autonomous tasks into longer sequences to transition to more of a supervised autonomy model. Then, the company hopes, it will gradually achieve full autonomy.
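
Sanctuary hasn’t published the tooling behind that pipeline, but the core idea, spotting which short action sequences recur most often in teleoperation logs and automating those first, fits in a few lines of Python. The log format and action names below are hypothetical, invented for illustration:

    from collections import Counter

    # Hypothetical teleoperation log: a stream of primitive action labels
    # recorded while a human pilot worked through the robot.
    log = ["approach", "grasp_handle", "turn_handle", "pull_door",
           "approach", "grasp_handle", "turn_handle", "pull_door",
           "approach", "pick_item", "place_item"]

    def frequent_subtasks(actions, length=2, top=3):
        """Count recurring n-grams of actions; the most frequent sequences
        are the best candidates to automate first."""
        grams = Counter(tuple(actions[i:i + length])
                        for i in range(len(actions) - length + 1))
        return grams.most_common(top)

    for subtask, count in frequent_subtasks(log):
        print(f"{count}x  " + " -> ".join(subtask))

Run against a real log, the top of that list would be exactly the door-opening and handle-grasping primitives Sanctuary describes automating first.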


What doesn’t really come through when you glance at Phoenix is just how unique Sanctuary’s philosophy on general-purpose humanoid robots is. All the talk about completing tasks like a person and humanlike intelligence—which honestly sounds a lot like the kind of meaningless hype you often find in breathless robotics press releases—is in fact a reflection of how Sanctuary thinks that humanoid robots should be designed and programmed to maximize their flexibility and usefulness.

To better understand this perspective, we spoke with Geordie Rose, Sanctuary AI founder and CEO.

Sanctuary has a unique approach to developing autonomous skills for humanoid robots. Can you describe what you’ve been working on for the past several years?

Geordie Rose: Our approach to general-purpose humanoid robots has two main steps. The first is high-quality teleoperation—a human pilot controlling a robot using a rig that transmits their physical movements to the robot, which moves in the same way. And the robot’s senses are transmitted back to the pilot as well. The reason why this is so important is that complex robots are very difficult to control, and if you want to get good data about accomplishing interesting tasks in the world, this is the gold star way to do that. We use that data in step two.

Step two is the automation of things that humans can do. This is a process, not an event. The way that we do it is by using a construct called a cognitive architecture, which is borrowed from cognitive science. It’s the idea that the way the human mind controls a human body is decomposable into parts, such as memory, motor control, visual cortex, and so on. When you’re engineering a control system for a robot, one of the things you can do is try to replicate each of those pieces in software to essentially try to emulate what cognitive scientists believe the human brain is doing. So, our cognitive control system is based on that premise, and the data that is collected in the first step of this process becomes examples that the cognitive system can learn from, just like you would learn from a teacher through demonstration.

The way the human mind evolved, and what it’s for, is to convert perception data of a certain kind into actions of a certain kind. So, the mind is kind of a machine that translates perception into action. If you want to build a mind, the obvious thing to do is to build a physical thing that collects the same kinds of sensory data and outputs the same kind of actuator data, so that you’re solving the same problems as the human brain solves. Our central thesis is that the shortest way to get to general intelligence of the human kind is via building a control system for a robot that shares the same sensory and action modes that we have as people.
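
As a software skeleton, the premise Rose describes might look something like the sketch below: perception comes in, modules borrowed from cognitive science update shared state, an action goes out, and the loop’s history doubles as training data. Every module name and interface here is invented for illustration; Sanctuary has not published its Carbon control system at this level of detail.

    from dataclasses import dataclass, field

    # Illustrative cognitive-architecture skeleton: perception in, action out.
    @dataclass
    class Memory:
        episodes: list = field(default_factory=list)
        def store(self, percept, action):
            # Demonstrations accumulate here for later learning.
            self.episodes.append((percept, action))

    class VisualCortex:
        def interpret(self, frame):
            # Stand-in for perception: raw input becomes symbols.
            return {"object": frame["object"], "reachable": True}

    class MotorControl:
        def act(self, percept):
            # Stand-in for policy: symbols become an action.
            if percept["reachable"]:
                return f"reach_for({percept['object']})"
            return "wait"

    def control_loop(frames):
        memory, vision, motor = Memory(), VisualCortex(), MotorControl()
        for frame in frames:
            percept = vision.interpret(frame)
            action = motor.act(percept)
            memory.store(percept, action)
            yield action

    print(list(control_loop([{"object": "cup"}, {"object": "door_handle"}])))

Running the loop yields one action per frame while memory fills with (percept, action) pairs, the raw material for the learning-from-demonstration step Rose describes.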

What made you decide on this cognitive approach, as opposed to one that’s more optimized for how robots have historically been designed and programmed?

Rose: Our previous company, Kindred, went down that road. We used essentially the same kinds of control tactics as we’re using at Sanctuary, but specialized for particular robot morphologies that we designed for specific tasks. What we found was that by doing so, you shave off all of the generality because you don’t need it. There’s nothing wrong with developing a specialized tool, but we decided that that’s not what we wanted to do—we wanted to go for a more ambitious goal.

What we’re trying to do is build a truly general-purpose technology; general purpose in the sense of being able to do the sorts of things that you’d expect a person to be able to do in the course of doing work. For that approach, human morphology is ideal, because all of our tools and environments are built for us.

How humanoid is the right amount of humanoid for a humanoid robot that will be leveraging your cognitive architecture approach and using human data as a model?

Rose: The place where we started is to focus on the things that are clearly the most valuable for delivering work. So, those are (roughly in order) the hands, the sensory apparatus like vision and haptics and sound and so on, and the ability to locomote to get the hands to work. There are a lot of different kinds of design decisions to make that are underneath those primary ones, but the primary ones are about the physical form that is necessary to actually deliver value in the world. It’s almost a truism that humans are defined by our brains and opposable thumbs, so we focus mostly on brains and hands.

What about adding sensing systems that humans don’t have to make things easier for your robot, like a wrist camera?

Rose: The main reason that we wouldn’t do that is to preserve our engineering clarity. When we started the project five years ago, one of the things we’ve never wavered on is the model of what we’re trying to do, and that’s fidelity to the human form when it comes to delivering work. While there are gray areas, adding sensors like wrist cameras is not helpful, in the general case—it makes the machine worse. The kind of cognition that humans have is based on certain kinds of sensory arrays, so the way that we think about the world is built around the way that we sense and act in it. The thesis we’ve focused on is trying to build a humanlike intelligence in a humanlike body to do labor.

“We’re a technologically advanced civilization, why aren’t there more robots? We believe that robots have traditionally fallen into this specialization trap of building the simplest possible thing for the most specific possible job. But that’s not necessary. Technology is advanced to the point where it’s a legitimate thing to ask: Could you build a machine that can do everything a person can do? Our answer is yes.”
—Geordie Rose, Sanctuary AI founder and CEO

When you say artificial general intelligence or humanlike intelligence, how far would you extend that?

Rose: All the way. I’m not claiming anything about the difficulty of the problem, because I think nobody knows how difficult it will be. Our team has the stated intent of trying to build a control system for a robot that is in nearly all ways the same as the way the mind controls the body in a person. That is a very tall order, of course, but it was the fundamental motivation, under certain interpretations, for why the field of AI was started in the first place. This idea of building generality in problem solving, and being able to deal with unforeseen circumstances, is the central feature of living in the real world. All animals have to solve this problem, because the real world is dangerous and ever-changing and so on. So the control system for a squirrel or a human needs to be able to adapt to ever-changing and dangerous conditions, and a properly designed control system for a robot needs to do that as well.

And by the way, I’m not slighting animals, because animals like squirrels are massively more powerful in terms of what they can do than the best machines that we’ve ever built. There’s this idea, I think that people might have, that there’s a lot of difference between a squirrel and a person. But if you can build a squirrel-like robot, you can layer on all of the symbolic and other AI stuff on top of it so that it can react to the world and understand it while also doing useful labor.

So there’s a bigger gap right now between robots and squirrels than there is between squirrels and humans?

Rose: Right now, there’s a bigger gap between robots and squirrels, but it’s closing quickly.

Aside from your overall approach of using humans as a model for your system, what are the reasons to put legs on a robot that’s intended to do labor?

Rose: In analyzing the role of legs in work, they do contribute to a lot of what we do in ways that are not completely obvious. Legs are nowhere near as important as hands, so in our strategy for rolling out the product, we’re perfectly fine using wheels. And I think wheels are a better solution to certain kinds of problems than legs are. But there are certain things where you do need legs, and so there are certain kinds of customers who have been adamant that legs are a requirement.

The way that I think about this is that legs are ultimately where you want to be if you want to cover all of the human experience. My view is that legs are currently lagging behind some of the other robotic hardware, but they’ll catch up. At some point in the not-too-distant future, there will be multiple folks who have built walking algorithms and so on that we can then use in our platform. So, for example, I think you’re familiar with Apptronik; we own part of that company. Part of the reason we made that investment was to use their legs if and when they can solve that problem.

From the commercial side, we can get away with not using legs for a while, and just use wheeled base systems to deliver hands to work. But ultimately, I would like to have legs as well.

How much of a gap is there between building a machine that is physically capable of doing useful tasks, and building a robot with the intelligence to autonomously do those tasks?

Rose: Something about robotics that I’ve always believed is that the thing that you’re looking at, the machine, is actually not the important part of the robot. The important part is the software, and that’s the hardest part of all of this. Building control systems that have the thing that we call intelligence still contains many deep mysteries.

The way that we’ve approached this is a layered one, where we begin by using teleoperation of the robots, which is an established technology that we’ve been working on for roughly a decade. That’s our fallback layer, and we’re building increasing layers of autonomy on top of that, so that eventually the system gets to the point of being fully autonomous. But that doesn’t happen in one go; it happens by adding layers of autonomy over time.

The problems in building a human-level AI are very, very deep and profound. I think they’re intimately connected to the problem of embodiment. My perspective is that you don’t get to general humanlike intelligence in software—that’s not the way that intelligence works. Intelligence is part of a process that converts perception into action in an embodied agent in the real world. And that’s the way we think about it: Intelligence is actually a thing that makes a body move, and if you don’t look at intelligence that way, you’ll never get to it. So, all of the problems of building artificial general intelligence, humanlike intelligence, are manifest inside of this control problem.

Building a true intelligence of the sort that lives inside a robot is a grand challenge. It’s a civilization-level challenge, but it’s the challenge that we’ve set for ourselves. This is the reason for the existence of this organization: to solve that problem, and then apply that to delivering labor.

15 May. 2023


Doreen Bogdan-Martin, secretary-general of the International Telecommunication Union, was named the recipient of this year’s IEEE President’s Award. She is being recognized for “distinguished leadership and contributions to the public.”

The IEEE member has championed global connectivity and digital inclusion for more than 30 years. Bogdan-Martin is the first woman to head the ITU, a U.N. agency headquartered in Geneva that helps set policy related to information and communication technology (ICT).

“It is my honor to recognize you as a transformational leader and an IEEE member for the commitment you made to bridge the digital divide globally,” Saifur Rahman, IEEE president and CEO, said in a news release about the award. IEEE sponsors the annual award.

Leading efforts to bridge the digital divide

Bogdan-Martin began her career in 1989 as a telecom policy specialist in the U.S. Department of Commerce’s National Telecommunications and Information Administration, in Washington, D.C. The agency advises the White House on telecommunications and information policy issues.

She left there after five years to join the newly created ITU telecommunication development sector as a policy analyst. The sector creates policies, regulations, training programs, and financial strategies for developing countries. She was promoted in 2005 to head the agency’s regulatory and market environment division, managing programs on regulatory reform, economics, and finance. She also advised governments on ICT reform and policy issues.

Three years later, Bogdan-Martin was appointed chief of the ITU’s Strategic Planning and Membership department, the most senior position in the general secretariat. She advised the secretary-general at the time, Hamadoun Touré. In addition she oversaw the organization’s membership, corporate communications, and external affairs departments.

In 2010 Bogdan-Martin helped establish the U.N. Broadband Commission for Sustainable Development, where she served as executive director, advocating for universal, affordable broadband. She helped create the ITU’s youth strategy, which aims to engage youngsters in the U.N.’s sustainable development agenda through programs and events. Its goals are to end poverty, protect the planet, and improve the prospects of communities around the world.

Bogdan-Martin organized the Equals Global Partnership for Gender Equality in the Digital Age and initiated a collaboration with UNICEF on the Giga initiative to connect schools to the Internet.

As an International Gender Champion, she works to break down barriers in science, technology, engineering, and mathematics. She is a member of the World Economic Forum 2030 vision leaders group and an affiliate of Harvard’s Berkman Klein Center, which studies cyberspace dynamics, norms, and standards. She is also an amateur radio operator, whose call sign is KD2JTX.

“I’m deeply humbled by this recognition,” Bogdan-Martin said in the news release about receiving the IEEE award. “I look forward to closely collaborating, cooperating, and strengthening the partnership between our institutions.”

She received the award on 5 May at the 2023 IEEE Vision, Innovation, and Challenges Summit and Honors Ceremony, held at the Hilton Atlanta.

15 May. 2023


The conventional way of adding a robot to your business is to pay someone else a lot of money to do it for you. While robots are a heck of a lot easier to program than they once were, they’re still kind of scary for nonroboticists, and efforts to make robotics more accessible to folks with software experience but not hardware experience haven’t really gotten anywhere. Obviously, there are all kinds of opportunities for robots (even simple robots) across all kinds of industries, but the barrier to entry is very high when the only way to realistically access those opportunities is to go through a system integrator. This may make sense for big companies, but for smaller businesses, it could be well out of reach.

Today, Intrinsic (the Alphabet company that acquired Open Robotics a little while back) is announcing its first product. Flowstate, in the words of Intrinsic’s press release, is “an intuitive, web-based developer environment to build robotic applications from concept to deployment.” We spoke with Intrinsic CEO Wendy Tan White along with Brian Gerkey, who directs the Open Robotics team at Intrinsic, to learn more about how Intrinsic hopes to use Flowstate to change industrial robotics development.

“Our mission is, in short, to democratize access to robotics. We’re making the ability to program intelligent robotic solutions as simple as standing up a website or mobile application.” —Wendy Tan White, Intrinsic CEO

To be honest, we’ve heard this sort of thing many times before: How robots will be easy now, and how you won’t need to be a roboticist (or hire a dedicated roboticist) to get them to do useful stuff. Robots have gotten somewhat easier over the years (even as they’ve gotten both more capable and more complicated), but this dream of every software developer also being able to develop robotics applications for robots hasn’t ever really materialized.

Intrinsic’s Flowstate developer environment is intended to take diverse robotic hardware and make it all programmable through one single accessible software system. If that sounds kind of like what Open Robotics’ Robot Operating System (ROS) does, well, that shouldn’t be much of a surprise. Here are some highlights from the press release:

  • Includes a graphical process builder that removes the need for extensive programming experience
  • Behavior trees make it easy to orchestrate complex process flows, authored through a flowchart-inspired graphical representation (a sketch of the behavior-tree idea follows this list)
  • Lay out a workcell and design a process in the same virtual environment, in the cloud or on-premise
  • Simulate and validate solutions in real time (using Gazebo) without touching a single piece of hardware
  • Encode domain knowledge in custom skills that can be used and reused, with basic skills like pose estimation, manipulation, force-based insertion, and path planning available at launch
  • Fully configured development environment provides clear APIs to contribute new skills to the platform
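
For readers who haven’t met behavior trees, here is a minimal sketch of the idea in plain Python. Flowstate authors these graphically, and its node types and skill APIs are certainly richer; everything below is invented to show the control-flow concept, not Intrinsic’s actual interface.

    # Minimal behavior-tree sketch. A Sequence fails fast; a Fallback
    # tries alternatives until one works.
    SUCCESS, FAILURE = "success", "failure"

    class Sequence:
        """Run children in order; fail as soon as one fails."""
        def __init__(self, *children): self.children = children
        def tick(self):
            for child in self.children:
                if child.tick() == FAILURE:
                    return FAILURE
            return SUCCESS

    class Fallback:
        """Try children in order; succeed as soon as one succeeds."""
        def __init__(self, *children): self.children = children
        def tick(self):
            for child in self.children:
                if child.tick() == SUCCESS:
                    return SUCCESS
            return FAILURE

    class Skill:
        """Leaf node wrapping a reusable skill (pose estimation, insertion...)."""
        def __init__(self, name, fn): self.name, self.fn = name, fn
        def tick(self):
            print("running skill:", self.name)
            return SUCCESS if self.fn() else FAILURE

    # Toy workcell process: locate a part, try to insert it, and re-grasp
    # and retry if the first insertion fails.
    tree = Sequence(
        Skill("estimate_pose", lambda: True),
        Fallback(
            Skill("force_based_insert", lambda: False),  # first attempt fails
            Sequence(Skill("regrasp", lambda: True),
                     Skill("force_based_insert", lambda: True)),
        ),
    )
    print("process:", tree.tick())

The appeal for nonprogrammers is that the tree, not hand-written control flow, carries the logic, so a graphical editor can assemble and rearrange it.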

Intrinsic’s Flowstate development environment, showing a flowchart-style process builder alongside a workcell simulator. Intrinsic

Intrinsic’s industry partner on this for the last several years has been Comau, an Italian automation company that you may not have heard of but that apparently built the first robotic assembly line in 1979—if a Wikipedia article with a bad citation is to be believed. Anyway, Comau currently does a lot of robotic automation in the automotive industry, so it has been able to help Intrinsic make sure that Flowstate is real-world useful. The company will be showing it off at Automatica, if you happen to find yourself in Munich at the end of June.

For some additional background and context and details and all that good stuff, we had a chat with Wendy Tan White and Brian Gerkey.

Intrinsic is certainly not the first company to work toward making it easier to program and deploy robots. How is your approach different, and why is it going to work?

Wendy Tan White: One of the things that’s really important to make robotics accessible is agnosticism. In robotics, much of the hardware is proprietary and not very interoperable. We’re looking at bridging that. And then there’s also who can actually develop the applications. At the moment, it still takes even an integrator or a developer multiple types of software to actually build an application, or they have to build it from scratch themselves, and if you want to add anything more sophisticated like force feedback or vision, you need a specialist. What we’re looking to do with our product is to encapsulate all of that, so that whether you’re a process engineer or a software developer, you can launch an application much easier and much faster without repeatedly rebuilding the plumbing every time.

Not having to rebuild the plumbing with every new application has been one of the promises of ROS, though. So how is your tool actually solving this problem?

Brian Gerkey: ROS handles the agnosticism quite well—it gives you a lot of the developer tools that you need. What it doesn’t give you is an application building experience that’s approachable, unless you’re already a software engineer. What I said in the early days of ROS was that we want to make it possible for every software developer to build robot applications. And I think we got pretty close. Now, we’re going a step further and saying, actually, you don’t even need to be a programmer, because we can give you this low/no code type of experience where you can still access all of that underlying functionality and build a fairly complex robot application.

And then also, as you know with ROS, it gives you the toolbox, but deploying an application is basically on you: How are you actually going to roll it out? How do you tie it into a cloud system? How do you have simulation be in the loop as part of the iterative development experience, and then the continuous integration and testing experience? So, there’s a lot of room between ROS as it exists today and a fully integrated product that ties all that together.

White: Bluntly, this is going to be our first product release. So you’ll get a sense of all of that from the beginning, but my guess is that it’s not going to complete everybody’s needs through the whole pipeline straight away, although it will satisfy a subset of folks. And from there you’ll see what we’re going to add in.

Brian, is this getting closer to what your vision for making ROS accessible has always been?

Gerkey: There was always this sense that we never had the opportunity to take the platform as it is, as a set of tools, and really finish it. Like, turn up the level of professionalism and polish and really integrate it seamlessly into a product, which is frankly what you would expect out of most modern open source projects. As an independent entity, it was difficult to find the resources necessary to invest in that kind of effort. With Intrinsic, we have the opportunity now to do both things—we have the opportunity to invest more in the underlying core, which we’re doing, and we also get to go beyond that and tie it all together into a unified product vision. I want to be clear, though, that the product that we’re announcing next week will not be that, because in large part it’s a product that’s been built independently over the last several years and has a different heritage. We’ll incrementally bring in more components from the ROS ecosystem into the Intrinsic stack, and there will be things that are developed on the Intrinsic side that we will push back into the ROS community as open source.

White: The intention is very much to converge the Intrinsic platform and ROS over time. And as Brian said, I really hope that a lot of what we develop together will go back into open source.

“We believe in the need for a holistic platform. One that makes it more seamless to use different types of hardware and software together…a platform that will benefit everyone in the robotics and automation industry.” —Wendy Tan White, Intrinsic CEO

What should experienced ROS users be most excited about?

Gerkey: We’re going to provide ROS users an on-ramp to bring their existing ROS-based systems into the Intrinsic systems. What they’ll then be able to do that they can’t do today is, for example, using a Web-native graphical tool, design the process flow for a real-world industrial application. They’ll be able to integrate that with a cloud-hosted simulation that lets them iteratively test what they’re building as they develop it to confirm that it works. They’ll have a way to then run that application on real hardware, using the same interface. They’ll have a pipeline to then deploy that to an edge device. ROS lets you do a lot of that stuff today, but it doesn’t include the unified development experience nor the deployment end of things.

How are you going to convince other companies to work with you on this product?

White: At the beginning, when we spoke to OEMs [original equipment manufacturers] and integrators, they were like, “Hang on a minute, we like our business model, why would we open up our software to you?” But actually, they’re all finding that they can’t meet demand. They need better, more efficient ways to build solutions for their customers. There has been a shift, and now they want things like this.

Gerkey: I’d like to give credit as well to the ROS Industrial Consortium that has spent the last 10 years getting robot OEMs and integrators and customers to work together on common problems. Initially, people thought that there was no way that the robot manufacturers were going to participate: They have their own vertically integrated software solutions, and that’s what they want their customers to use. But in fact, there’s additional value from interoperability with other software ecosystems, and you can sell more robots if they’re more flexible and more usable.

With much of the functionality of your platform being dependent on skills, what is the incentive for people to share new skills that they develop?

White: We do intend to ultimately become a distribution platform. So, what we would expect is if people add skills to the platform, they will get compensated. We’re really creating a demand and supply marketplace, but we’re not starting there—our first product will be the solution builder itself, to prove that the value is there.

Gerkey: We’ve demonstrated that there’s huge potential to get people to share what they’re doing. Everyone has different motivations—could be karma, could be altruism, but sharing the engineering burden is the more rational reason to participate in the open source community. And then on top of all those potential motivations, here we’ve got the opportunity to set up this distribution channel where they can get paid as well.

And what’s the incentive for Intrinsic? How is this a business for you?

White: Initially there will be a developer license. What we’re looking for longer term as applications are built is a fee per application used, and ultimately per robot deployed. We have partners already who are willing to pay for this, so that’s how we know it’s a good place to start.

As we’ve pointed out, this is not the first attempt at making industrial robots easy to program for nonroboticists, nor is it the first attempt at launching a sort of robot app store. Having said that, if anyone can actually make this work, it sure seems like it would be this current combination of Intrinsic and Open Robotics.

If Flowstate seems interesting to you and you want to give it a try, you can apply to join the private beta here.

14 May. 2023


The British engineer James Wimshurst did not invent the machine that bears his name. But thanks to his many refinements to a distinctive type of electrostatic generator, we now have the Wimshurst influence machine.

What does a Wimshurst machine do?

Influence machines date back to the 18th century. They are a class of generator that converts mechanical work into electrostatic energy through induction. By the mid-19th century, the German physicists Wilhelm Holtz and August Toepler had each developed a model that featured rotating vertical glass disks. It was this style of generator that Wimshurst began tinkering with in his home workshop in the early 1880s. By 1883 he had solidified his design.

The Wimshurst machine as it exists today has two insulated disks, often made from plastic but sometimes still made from glass, with metal conducting plates positioned around the rims. The disks are mounted on a single axle and rotate in opposite directions when driven by a hand crank.

As the disks rotate, a small starting charge, either positive or negative, on one metal plate will move toward a double-ended brush on the second disk. When the plate aligns with the brush, it will induce an equal and opposite charge on the plate that’s directly across from it on the other disk. The resulting charge in turn causes an opposite charge on a plate on the first disk. Meanwhile, plates on the second disk induce charges on the first disk. Metal collector combs separate the charges into positive and negative and conduct them to two Leyden jar capacitors. The buildup eventually discharges with a spark that jumps the gap between two terminals, and the process begins again. A tabletop Wimshurst machine could produce up to 50,000 or 60,000 volts, as this video demonstrates:

Animate It - Wimshurst Machine www.youtube.com
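
Those voltage figures can be sanity-checked against the spark itself. Dry air near sea level breaks down at roughly 3 kilovolts per millimeter (a textbook figure for near-uniform fields; real gaps vary), so the length of the spark a machine throws is a crude voltmeter:

    # Spark length as a rough voltmeter, assuming air breaks down at ~3 kV/mm.
    BREAKDOWN_KV_PER_MM = 3.0

    for volts in (50_000, 60_000):
        gap_mm = (volts / 1_000) / BREAKDOWN_KV_PER_MM
        print(f"{volts:,} V -> a spark of roughly {gap_mm:.0f} mm")

A 17- to 20-millimeter spark is just what a healthy tabletop machine will throw.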

The simple design was easy to reproduce and operate, and so Wimshurst machines found their way into laboratories, schools, and even the homes of well-to-do Victorians. Because of the high voltage they could produce, the machines were used to excite Crookes tubes and generate X-rays for medical imaging during the early 20th century.


When Wimshurst died suddenly at his home on 3 January 1903 at the age of 71, the editors at Nature found him worthy of an obituary. Twenty-nine years later, on the centenary of his birth, Nature again published a note calling him “among the best-known inventors of electrical machines of the last part of the nineteenth century.” And yet, to my knowledge, there’s no dedicated, full-length biography of the man. In fact, most online searches turn up the same set of details noted in Nature’s original reports: He was the son of shipbuilder Henry Wimshurst (the pioneering constructor of the screw-propelled ship Archimedes); apprentice at the Thames Iron Works; ship surveyor to Lloyd’s Register; from 1865 to 1874, chief of the Liverpool Underwriters’ Registry; and finally for the last 25 years of his working life, until reaching the mandatory retirement age of 67, principal shipwright surveyor for the Board of Trade.

James Wimshurst developed his eponymous machines in his spare time. By day, he was a shipwright surveyor for the British Board of Trade. Antônio Carlos M. de Queiroz/Wikipedia

Wimshurst’s electrical pursuits were entirely a hobby, something he did in his spare time at his house in Clapham in southwest London. With the assistance of his two sons, he created a laboratory and workshop where he tinkered with influence machines until he perfected his design. Wimshurst made more than 90 of his eponymous machines. Most fit easily on a tabletop, such as this one preserved at the Science Museum London, which measures 56 by 67 by 30.5 centimeters. But he also created one exceptionally large machine that’s 2.1 meters tall and on exhibit at the Museum of Science and Industry in Chicago. (The machine pictured at top is at the Yale Peabody Museum of Natural History, in New Haven, Conn. It was manufactured in Germany and sold in the United States by James W. Queen & Co.)

Wimshurst never patented any of his refinements to the machine, but he was eager to get the word out about his inventions. In 1886 he published a book, Static Electricity. The “Influence Machine”: How to Make It and How to Use It. And on 27 April 1888, he delivered a lecture on the machines at the Royal Institution. Recognized for his scientific achievements, in 1898 he was elected a fellow of the Royal Society. He was also a member of the Institution of Electrical Engineers and the Röntgen Society, and a member of the board of managers of the Royal Institution.

But the part of Wimshurst’s history I find most interesting is in the last paragraph of Nature’s obituary: “All Mr. Wimshurst’s scientific research was done for pure love of the work, and he persistently refused to accept any pecuniary benefit from it.” Perhaps that’s why historians haven’t yet written his biography: They don’t know how to treat a truly altruistic inventor.

Try this at home: an electric kiss

Wimshurst machines are readily available for purchase today. They’re still used in schools and science museums to demonstrate the basics of electricity. You can also build your own device with items from a hardware store.

In volume 17 of Make magazine (2009), the steampunk enthusiast known as Jake von Slatt described how to build a Wimshurst machine. (Von Slatt also appeared in IEEE Spectrum’s October 2008 article “The Steampunk Contraptors.”) His updated instructions are available online in a five-part series.

If you do decide to go all in on making your own Wimshurst machine, I suggest taking it to the logical next step of planning a dinner party around Victorian parlor tricks. Rather than breaking out Monopoly, Trivial Pursuit, or Parcheesi yet again, why not party like the Victorians, who were apparently mad about electricity?

To be clear, the Victorians were simply continuing a trend that dated back more than a century; Ben Franklin also liked to entertain his guests with electricity-based games. Thanks to the simplicity of the Wimshurst machine, partygoers in the 1890s had a reliable source of easily generated electricity.

One popular game was called the electrical kiss: Crank up your Wimshurst machine, get a courting couple to each take hold of one of the capacitors, and have them lean in for a surprise. It’s a little more shocking than spin the bottle. Although none of the literature I reviewed showed concern about the kissing couple accidentally electrocuting themselves, the website of Indiana University’s department of physics carries a warning about demonstrating its Wimshurst machine: “Can produce lethal shocks when connected to too many Leyden jars.”
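
That warning comes down to stored energy. A Leyden jar is simply a capacitor holding E = ½CV², and capacitances add when jars are wired in parallel. A rough sketch, assuming a typical small-jar capacitance of about 1 nanofarad (an assumed value; real jars vary):

    # Stored energy in the machine's Leyden jars: E = 0.5 * C * V^2.
    JAR_CAPACITANCE = 1e-9   # farads, assumed typical small-jar value
    VOLTAGE = 50_000         # volts, near the top of the machine's range

    for jars in (1, 2, 10):  # capacitances add for jars in parallel
        energy_joules = 0.5 * (jars * JAR_CAPACITANCE) * VOLTAGE ** 2
        print(f"{jars:>2} jar(s): {energy_joules:5.2f} J")

A single jar’s joule or so makes for a memorable kiss; gang ten together and the stored energy climbs by an order of magnitude, which is exactly the territory Indiana University’s warning is about.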

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the June 2023 print issue as “Wimshurst’s Electrostatic Immortality.”

14 May. 2023


Vint Cerf, recipient of the 2023 IEEE Medal of Honor, has this advice for engineers starting out in their careers:

  • “If you really want to do something big, get help, and preferably from people who are smarter than you are.”
  • “Be humble, because unless you approach things with the understanding that you really don’t know exactly how to make it all work, you may overlook possibilities.”
  • “Listen to other people. I tell my engineers that if they know I’m about to do something stupid, they have to tell me, so I don’t do it. And if they knew and didn’t tell me, that’s going to be reflected in their end-of-year fitness report. When you’re in a position of responsibility and authority, people may assume you’ve already figured out where the hazards are, but you may not have.”
  • “Try hard to stay on good terms with everybody. Civility is an important property, and burning bridges is generally a bad idea; you never know who you’re going to work with again, who you might work for, or who might work for you.”
  • “You can learn something from virtually everybody. One example: I was being driven in a limousine in Palm Springs by a white-haired guy. And I remember thinking, ‘This poor guy, it’s too bad. Here he is driving a limo. It’s nine o’clock at night. He ought to be just out there on the links playing golf and having a nice time.’ We struck up a conversation, and I find out that he actually did retire—from being the chief financial officer of one of the largest insurance companies in Chicago. He got bored playing golf, so he decided to drive a limo three times a week because he knew he was going to meet interesting people.”

13 May. 2023


The aviation business is nothing new for Ohio. The state is a major supplier for Airbus and Boeing and is home to around 150 airports. Back in 2003, the House even passed a resolution acknowledging the role of Dayton in America’s aviation history—noting that the Wright brothers were from the city.

But now, Ohio is racing to ensure it’s a major player in the next chapter of aviation history. The state is investing heavily in resources that it hopes will draw startups building drones, autonomous planes, and electric vertical take-off and landing (eVTOL) vehicles to its cities and airports. Last week, the Ohio Department of Transportation announced that it would begin using new software, sold by a company called CAL Analytics, to monitor uncrewed aircraft, in a bid to prepare for an influx of futuristic new vehicles flying in the state.

“It is going to be so much cheaper than traditional aircraft flying.”
—Rich Fox, Ohio Unmanned Aircraft Systems Center

The move is part of a broader strategy. Last year, Ohio became the first state in the country to release an “advanced air mobility” framework, a massive effort to create infrastructure for supporting and regulating flying taxis that are powered by batteries and navigated by AI. Local officials are hoping that, eventually, these vehicles could help with delivering packages and transporting people to and from urban locations or even sparsely populated areas.

At the same time, officials are betting that by creating a framework for testing and developing these vehicles, Ohio can take a leading role in the future of aviation and set a model for other states and regional governments, too.

“The whole impetus behind building our infrastructure is to streamline the process for companies to come to Ohio,” Rich Fox, from the Ohio Unmanned Aircraft Systems Center, said. “It is going to be so much cheaper than traditional aircraft flying.”

Officials see the software from CAL Analytics—which previously received funding from the Ohio Federal Research Network—as a key next step. This software will help remote pilots operate uncrewed aircraft, and also assist the Ohio Department of Transportation with communications, surveillance, and infrastructure monitoring. The state plans to roll out the system at the National Advanced Air Mobility Center of Excellence, a new facility focused on eVTOL vehicles and other advanced air mobility, according to Fox, from the UAS Center.

That center, which broke ground last September, is based at the Springfield-Beckley Municipal Airport and was funded by the Department of Defense, the city of Springfield, and JobsOhio, a state economic development agency.

Ohio’s Advanced Air Mobility Framework, which was released by the Ohio DOT last August, outlines where all these efforts are supposed to go. Officials imagine fleets of advanced aviation technologies, including remotely piloted and automated aircraft, delivery drones, and electric passenger vehicles, shuttling across the state. The hope is that these vehicles will make traveling short distances cheaper and more sustainable.

Eventually, officials will need to convince passengers to feel safe and comfortable actually riding these vehicles, which could be a significant undertaking.

There are real hurdles, though. Even die-hard proponents admit that eVTOLs are still years away at best, which means the companies working on this technology are making a big, and risky, financial bet. KittyHawk, the Larry Page–backed flying-taxi company, demonstrated its first beyond-visual-line-of-sight flight—a critical milestone for its tech—in Ohio, but then shut down this past fall. Officials also need to figure out how to collaborate with the federal agencies focused on regulating airspace and aviation, and particularly with the FAA. Eventually, they’ll need to convince passengers to feel safe and comfortable actually riding these vehicles, which could be a significant undertaking.

There has been progress. Today, several aviation startups are active in Ohio. Both Beta Technologies and Joby Aviation have used a simulator facility based at the Springfield airport. Springfield was also where the Austin-based Lift Aircraft eVTOL startup brought its first vehicle, its single-seat flying taxi, Hexa. Moog, an aerospace and defense company, has tested its two-seat SureFly eVTOL vehicle at the Cincinnati Lunken Airport, too.

Near Dayton, AFWERX, the Air Force’s startup incubator, uses the Wright-Patterson Air Force Base for advanced air-mobility work. NASA has conducted other advanced air-mobility work in the state, including studying the level of noise eVTOL aircraft produce as they travel.

Of course, other states are also trying to snag a piece of this future industry. New York now has an FAA-approved drone corridor, and the state has invested tens of millions in uncrewed aircraft. Companies have also tested eVTOLs in North Carolina, and NASA is investigating electric helicopters in Texas. Still, the idea is that investing heavily in this effort—and solving the real challenges of UAS and eVTOL systems—now will keep Ohio’s aviation history alive.

“We have to continue to develop the technologies,” Elaine Bryant, the executive vice president of aerospace and defense at the Dayton Development Coalition, said. “There’s a lot of research in AI and sensors and autonomy. All the things that will allow these vehicles to be efficient and allow us to get around and not just move people but move goods and services as well.”

13 May. 2023


The CHIPS and Science Act, aimed at kick-starting chip manufacturing in the United States, only began taking requests for pieces of its US $50 billion in March, but chipmakers were already gearing up beforehand. Memory and storage chipmaker Micron announced as much as $100 billion for a new plant in upstate New York. Taiwan Semiconductor Manufacturing Co. (TSMC), which was already building a $12 billion fab in Arizona, upped the investment to $40 billion with a second plant. Samsung is planning a $17 billion fab near Austin, Texas, and in September Intel broke ground on the first of two massive new facilities worth $20 billion in central Ohio.

Exciting as this is for the U.S. economy, there’s a potential problem: Where will the industry find the qualified workforce needed to run these plants and design the chips they’ll make? The United States today manufactures just 12 percent of the world’s chips, down from 37 percent in 1990, according to a September 2020 report by the Semiconductor Industry Association. Over those decades, experts say, semiconductor and hardware education has stagnated. But for the CHIPS Act to succeed, each fab will need hundreds of skilled engineers and technicians of all stripes, with training ranging from two-year associate degrees to Ph.D.s.

Engineering schools in the United States are now racing to produce that talent. Universities and community colleges are revamping their semiconductor-related curricula and forging strategic partnerships with one another and with industry to train the staff needed to run U.S. foundries. There were around 20,000 job openings in the semiconductor industry at the end of 2022, according to Peter Bermel, an electrical and computer engineering professor at Purdue University. “Even if there’s limited growth in this field, you’d need a minimum of 50,000 more hires in the next five years. We need to ramp up our efforts really quickly.”

Intel arrives at Ohio State

Ohio State University is using its chip-fabrication facility to train future engineers and technicians. Here, from left to right, are OSU students Caleb Mallory and Jayne Griffith, manager of nanofabrication Aimee Price, and Columbus State Community College student Chris Staudt, who’s also on staff at OSU’s Nanotech West Laboratory. Peter Adams

The U.S. Midwest might be known more for farming and heavy industry than semiconductors, but chipmakers are betting it is fertile ground for their industry, thanks to an abundance of research universities and technical colleges.

Take Intel, which wants to create a “Silicon Heartland” in Ohio. In addition to building two cutting-edge chip factories on a 4-square-kilometer megasite that could hold six more fabs, the company has pledged $50 million to 80 higher-education institutions in the state. The funds should help the universities and community colleges upgrade their curricula, train and hire faculty, and buy equipment; Intel also plans to offer internships, guidance, and research opportunities.

Part of those funds have gone to Ohio State University, which will lead a new interdisciplinary Center for Advanced Semiconductor Fabrication Research and Education that will span 10 in-state colleges and universities. While most of the semiconductor-related curriculum has been designed for students in electrical and computer engineering, OSU now wants to bring in students from other disciplines. The university is creating tracks for them to master semiconductor-related skills, and it’s revamping the curriculum in those disciplines to reflect the latest industry technology. Materials engineers will have new courses on chip packaging materials, industrial system engineers will learn semiconductor manufacturing processes, and mechanical engineers will get to know device fabrication tools, says Ayanna Howard, dean of OSU’s college of engineering. “Now that we’re bringing manufacturing back to [U.S.] shores, our curriculum is now bringing in all these components that have always been needed but haven’t been part of the plan at the scale required to train all these engineers.”

There is no shortage of talent in the region, Howard adds, since manufacturing is already a major activity in Ohio and other parts of the Midwest. In 2011, Ohio kicked off an initiative called JobsOhio to create more science, technology, engineering, and math (STEM) graduates in the areas of computer science, biotech, and health-care manufacturing. It’s now a matter of overhauling the curricula to cater to semiconductor manufacturing, she says. “When Intel came to the region, it really reinforced all the things that we had been thinking about.”

In addition to leading two projects with state colleges, OSU is collaborating with 10 other midwestern institutions, including Purdue and the University of Michigan, to “think about engineering education more holistically,” says Howard. “How do we create a curriculum that allows universities that might not have the infrastructure—say, lab space or trained faculty—to give students semiconductor experience?”

In the fall of 2021, for example, OSU piloted a course to teach students about chip-fabrication processes using desktop laboratory equipment, allowing them to learn without an expensive clean room. The engineering school is also teaming up with the creative arts department to create augmented-reality and virtual-reality tools that will let students experience a simulated fab.

SkyWater moves next door to Purdue

Graduate students Laura Chavez [top] and Marvin Zhang [bottom] use Purdue University’s Birck Nanotechnology Center to characterize semiconductors and ICs. SkyWater, a Minnesota-based foundry, is building a fab near the university. Top: Charles Jischke/Purdue University; Bottom: Rebecca Robiños/Purdue University

About 400 kilometers (250 miles) west of Intel’s development, another fab is planned. In July 2022, SkyWater Technology, a foundry that makes chips using specialty and mature manufacturing processes, announced a $1.8 billion chip fab at an industrial park in West Lafayette, Ind. Next door, Purdue has launched a new interdisciplinary Semiconductor Degrees Program to give undergraduate and graduate students a range of options for gaining core skills needed for the semiconductor industry. While EE courses traditionally cover integrated circuits and chip design, the new program teaches other key chip-manufacturing steps, including chemicals, materials, tools, manufacturing, packaging, and even supply-chain management. Students can choose to minor in the program, earn a master’s degree, or get a certification.

SkyWater representatives will inform students about various career options in the program’s introductory seminar course. Students are guaranteed experience in Purdue’s nanotechnology centers and at semiconductor companies. Advanced courses cover semiconductor materials and devices, as well as industrially relevant system-on-chip design. The program builds on Purdue’s SCALE program, funded by the U.S. Department of Defense and launched in 2020, which trains undergrads to design and build semiconductors for space. “SCALE is specific to defense microelectronics, but the SCALE and semiconductor-degrees programs are synergistic,” says Purdue’s Bermel. “Chips are fairly agnostic in many ways about the exact application space.”

This year, Purdue will kick off a new program aimed at educating workers for SkyWater. Supported by state and regional economic-development organizations, the program will include operator and technician training through associate degree courses at partner Ivy Tech Community College. Besides developing targeted coursework and internships with the company, the Purdue team plans outreach at local high schools about job opportunities at the neighboring fab, in the hope of attracting more students to engineering.

A summer internship at SkyWater’s Florida foundry solidified Purdue undergraduate Anika Bhoopalam’s interest in the semiconductor industry. Bhoopalam, a senior majoring in chemical engineering with a minor in electrical and computer engineering, feels that the physics, materials science, and engineering courses she has taken, combined with research lab experience fabricating thin films and solar cells, have prepared her well. She plans to pursue a Ph.D. with a focus on materials science and solid-state physics so she can go on to work in chip manufacturing. “I found the semiconductor industry to be a fast-paced, exciting, and interesting field where you get to work on different tasks every day.”

Illinois ups its chip education game

Undergraduate engineering students at the University of Illinois Urbana-Champaign take part in a course on designing and constructing ICs, which was once considered graduate-level work. Here, Jenna Cario [gray shirt], Andy Ng [plaid], Stanley Wu [white and blue], and Curtis Yu [in black] collaborate on various projects. Virgil Ward/University of Illinois Urbana-Champaign

Students traditionally develop, build, and test integrated circuits in graduate school. But universities are trying to provide that first hands-on experience earlier. In the fall of 2021, electrical and computer engineering professors Pavan Hanumolu and Rakesh Kumar at the University of Illinois Urbana-Champaign created a class called Advanced Systems Design, which leads senior-year undergrads through every step of making an integrated circuit.

“It’s a nitty-gritty job,” says Hanumolu. “Companies don’t want students learning this on the job. If we can provide these skills, that might shorten the route to increasing the talent pool for industry.”

Students work in teams, defining a problem and designing a functional CMOS integrated circuit to solve it. The designs are sent off to a TSMC fab for manufacture, after which students test the chips, redesign the circuit as needed, and create a printed circuit board for the chip. So far, 30 students have taken the class, Kumar says, and several have gone on to internships and jobs with companies like Apple, Intel, and Siemens. “Employers have appreciated the rigor our students go through,” Hanumolu says. “They know the kind of unique skill set these students will graduate with.”

Arizona State adds TSMC

Staff member Alex Cabrera [left] and student Monica Gaytan walk through a lab at the MacroTechnology Works research facility, where Arizona State University students work with fab equipment. Deanna Dent/Arizona State University

While cutting-edge chip fabs might be new in the Midwest, in the Southwest, Arizona State University has had a head start in preparing for an expanding U.S. chip industry. Motorola picked the deserts near Phoenix for its plant back in the 1960s. Aerospace and defense companies followed, and then Intel arrived in the 1980s and is currently expanding. Most recently, TSMC broke ground on its big U.S. fab complex in Phoenix in 2021. Connections to the companies form a solid foundation for semiconductor education and research at ASU, says Kyle Squires, dean of the engineering school.

“You can see in our DNA the origins from this semiconductor presence going back to the ’60s,” he says. A significant fraction of the EE faculty comes in with industry experience and maintains close ties to those companies. They bring that expertise into the hardware engineering classes and labs they design, Squires says, while giving students access to scholarships, research opportunities, internships, and eventually jobs. “It’s a way for us to maintain currency with these companies, what they’re doing, where they’re headed. As technology needs continue to move, so does the curriculum. It’s research informing teaching, and vice versa. It’s a feedback loop.”

ASU boasts a large microelectronics facility—originally a Motorola semiconductor fab—and EE students in their junior and senior years can choose electives that give them direct semiconductor processing experience in the facility’s clean rooms. Graduate students, meanwhile, can pursue a 15-credit Certificate in Semiconductor Processing that trains them in various aspects of chip production. Squires acknowledges that putting multimillion-dollar tools into an undergrad lab is unrealistic for most universities, so forging relationships with industry partners can help make up the difference.

Community colleges could be key

Community colleges will be key to filling the workforce needs of new fabs. Here, Maricopa Community College students work in an Intel-sponsored lab in Tempe, Ariz. Maricopa Community College/Intel

More than being a partner, Intel sees itself as a catalyst for upgrading the higher-education system to produce the workforce it needs, says the company’s director of university research collaboration, Gabriela Cruz Thompson. One of the few semiconductor companies still producing most of its wafers in the United States, Intel is expanding its fabs in Arizona, New Mexico, and Oregon. Of the 7,000 jobs created as a result, about 70 percent will be for people with two-year degrees, with the remainder split among those with bachelor’s degrees, master’s degrees, and Ph.D.s.

Since COVID, however, Intel has struggled to find enough operators and technicians with two-year degrees to keep the foundries running. This makes community colleges a crucial piece of the microelectronics workforce puzzle, Thompson says. In Ohio, the company is giving most of its educational funds to technical and community colleges so they can add semiconductor-specific training to existing advanced manufacturing programs. Intel is also asking universities to provide hands-on clean-room experience to community college students.

Samsung and Silicon Labs in Austin are similarly investing in neighboring community colleges and technical schools via scholarships, summer internships, and mentorship programs. Samsung supports an initiative at Austin Community College that provides technician training for high school students. The company’s Fab Apprentice Program, meanwhile, allows students to complete their associate degree while working at Samsung two days a week. “We pay 100 percent of tuition and books as long as the student maintains a 3.0 GPA or higher,” says Michele Glaze, head of communications and community affairs at Samsung Austin Semiconductor.

Workforce shortages everywhere

Chip companies are struggling to find workers in South Korea, too. Here, Samsung employees work on a project during their job-related training. Samsung

The semiconductor talent shortage isn’t unique to U.S. shores. Taiwan makes about 65 percent of the world’s chips, but finding young semiconductor engineers there has been getting increasingly difficult, according to reports. Semiconductor firms around the world are competing for talent: They’re hiking salaries and doling out scholarships, internships, and mentorships to undergraduates and even vocational high school students, in hopes of attracting them early. “As the need for advanced semiconductors continues to increase and chip manufacturers compete for talent, we see the supply in the workforce trailing the high demand,” Samsung’s Glaze says.

Samsung works with four major universities in South Korea, providing tailored curricula to train students in semiconductor R&D and manufacturing. Taiwan’s government, meanwhile, is partnering with chip companies to invest $300 million in specialized chip-focused graduate schools within top universities to train the next generation of semiconductor engineers.

Enticing more students to study engineering is a big challenge, Intel’s Thompson says. Attractive jobs in the software industry have shifted the balance between electrical engineering and computer science. “We hear from academics that we’re losing EE students to software,” she says. “But we also need the software. I think it’s a totality of ‘We need more students in STEM careers.’”

The CHIPS Act might be just what was needed to put semiconductors in the limelight and entice students toward hardware-related degrees. At Purdue, Bermel says he has seen an uptick in interest in the semiconductor information session at the annual September career fair. Historically, the fair has drawn only a handful of semiconductor employers, but this year it had 28 and attracted over 600 students. For the semiconductor industry to be more successful going forward, the software industry’s practices might be worth following, he says. That includes “providing better opportunities for students even after only the first year of undergrad if possible, paying them very well, but also making it more evident to the general public why semiconductor companies are important.”
