A New Orbit: How Project Teams Are Using AI in Space

Transcript


Artificial intelligence (AI) is making one giant leap in the space industry. In celebration of World Space Week, we’re highlighting how project teams are tapping AI for space exploration and innovation.

Dave Evans, the OPS-SAT Space Lab manager at the European Space Agency in Darmstadt, Germany, discusses the OPS-SAT Space Lab service and how his team—and others from agencies across the globe—used the satellite to test new tech, including AI, in orbit. Evans also talks about the space industry’s interest in AI.

Dave Salvagnini, chief data officer and chief artificial intelligence officer at NASA in Washington, D.C., shares how the U.S. space agency ensures ethical, responsible AI use across teams, different ways NASA is using AI on projects, and how he anticipates such advanced digital tools will transform the space sector in the years ahead.


STEVE HENDERSHOT 
Today’s satellites observe, calculate, analyze and predict from way above—so it should come as no surprise that some of them are getting a boost from algorithms.

But that’s just one way artificial intelligence (AI) is making one giant leap across the space industry.

Today, let’s explore how AI could propel the next wave of missions—perhaps in ways we’ve yet to imagine.

In today’s fast-paced and complex business landscape, project professionals lead the way, delivering value while tackling critical challenges and embracing innovative ways of working. On Projectified®, we bring you insights from the project management community to help you thrive in this evolving world of work through real-world stories and strategies, inspiring you to advance your career and make a positive impact.

This is Projectified. I’m Steve Hendershot.

Space teams around the world are taking AI to a new dimension.

The Indian Space Research Organisation’s first successful moon landing in 2023 used AI-powered sensors to ensure a smooth lunar touchdown.

The Canadian Space Agency and its partners are developing the latest generation of a robotic arm that uses cutting-edge software to perform some tasks autonomously. The system will be used in the NASA-led Gateway program, which will establish the first space station around the moon.

And teams at NASA’s Goddard Space Flight Center are training a machine learning algorithm to help researchers quickly analyze data from rover samples—so scientists can plan the best use of a rover’s time on another planet. That algorithm will be put to the test on a European Space Agency-led mission—scheduled to launch no earlier than 2028—when the Rosalind Franklin rover will seek to discover whether life ever existed on Mars.

PMI has plenty of resources on how AI can deliver value in your projects. Go to PMI.org/podcast and click on the transcript for this episode.

Now, let’s hear from space project leaders in this AI orbit. We start with Dave Evans, the European Space Agency’s OPS-SAT Space Lab manager at the European Space Operations Center in Darmstadt, Germany. OPS-SAT is a “flying laboratory” that allows scientists to experiment with technologies on a satellite in space without taking on high levels of risk if experiments don’t succeed.

MUSICAL TRANSITION

STEVE HENDERSHOT 
Dave, thanks so much for joining us. Let’s start with a primer on what OPS-SAT is and how it has allowed companies and international agencies to experiment in space.

DAVE EVANS 
It’s actually a service called the OPS-SAT Space Lab service. It’s run by ESA, the European Space Agency. The whole idea is to give industry and institutions a fast, cost-free, nonbureaucratic way to try out their firmware and software in space. NASA JPL (Jet Propulsion Laboratory), JAXA (Japan Aerospace Exploration Agency, the Japanese space agency), the German space agency (DLR), the French space agency (CNES): they all flew experiments on OPS-SAT. The European Commission also flew things.

The basic reason to do this is that it’s very difficult to get access to space to do experimentation. You have missions that cost a billion dollars. You’re not going to experiment on such satellites. So the idea came about to create a satellite where this is actually possible. We’ve just finished one mission called OPS-SAT 1, and we have others now in development, which will all go under this banner of providing this service. Now, if you think about it as a place where you can do things on your satellite which you would never be allowed to do on your own satellite, then AI is one of those things that immediately springs to mind. It’s one of the things that we have definitely tried out. On OPS-SAT 1, in the end, we ended up with 134 different experimenters and 284 different types of experiments, and I would estimate that 30% of those had some AI component.

STEVE HENDERSHOT 
How difficult was it to get support for this? In some ways, OPS-SAT involves a project management problem—“How do we allow for innovation and experimentation while also mitigating risk in a high-cost environment?” So how did that work? 

DAVE EVANS
Yeah. Well, it wasn’t easy. That’s the first thing. We came up with this crazy idea of, “We should try and do experiments in space on mission-critical subsystems.” This wasn’t received very well by everybody in the organization. There were quite a few people saying, “You shouldn’t do experiments in space. These things should be done on the ground,” for instance. But we persevered and came up with a design. You have to design the satellites to be safe. What we did is design the whole system, the ground and the space elements, in a way where we put less emphasis on preventing bad things from happening and more on being able to recover if they did.

The experimenter community—I would say it’s a community—just grew and grew over time. The experiments also got crazier and crazier, and they also built on them so that the experimenters would help each other. They were all interested in having their software or firmware used. So one would build, I don’t know, something like an AI application to sort pictures out. And then the next one would use a compression algorithm to compress those pictures. And another might use a secure application to download those pictures to the ground in a secure way. The whole thing kind of chained together and exploded, really. We were executing experiments to the very last moment. 

How to experiment with AI—and use it in the right projects

STEVE HENDERSHOT 
Let’s talk more about that AI photo experiment, because that’s one your team was part of. Tell us how your team in particular started using AI on the satellite. 

DAVE EVANS 
So I was not an AI fan or adopter or anything like this, but I had an intern, George. I was explaining to him proudly how we would download pictures on OPS-SAT. We would first make thumbnails, so smaller pictures, download those, and then look at them on the ground and decide which ones were bad and which ones were good, then simply delete the bad ones on board and just download the good ones. And I was proudly showing him how much bandwidth this was saving us, and the efficiency and everything, and George said, “Oh, you can do that with AI.” Well, I was a little bit skeptical, and I said, “Okay, George, you’ve got two weeks. Come back in two weeks, and we’ll talk about where you are in your project”—fully expecting that in two weeks he would come back with no progress at all. In two weeks, George came back with a framework from Google. He trained a model on the ground using all the thumbnails that we had up to then, which was 5,000 pictures. He loaded that model to the spacecraft and [had] done some tests, and he was achieving 95% accuracy in sorting the good pictures from the bad pictures, and then later into different types, like sea, land, edge, things like this. And I was dumbfounded. I asked him how much coding he’d done. He said very little coding. He just followed the instructions, and he was achieving 95%, which for my particular application was wonderful. I immediately adopted it into the operations concept.

We called it smart cam. We used smart cam from then onwards to sort the pictures out on board. It was a real success, and when we told people about it, other experimenters started building on top of that. So I would say that we became unwitting adopters of AI because it solved some of our operational problems, and, like I say, I was completely astounded at the results.
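The train-on-ground, filter-on-board loop Evans describes can be sketched in miniature. The brightness-threshold “model” below is a deliberately simplified stand-in for the neural network the real experiment used; the image sizes, class balance and heuristic are illustrative assumptions, not details of the actual OPS-SAT smart cam:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_thumbnail(good: bool) -> np.ndarray:
    """Synthetic 8x8 grayscale thumbnail; 'bad' frames are nearly black."""
    base = 0.6 if good else 0.05
    return np.clip(base + 0.1 * rng.standard_normal((8, 8)), 0.0, 1.0)

# "Ground" step: a labeled training set built from thumbnails already
# downlinked, standing in for the 5,000 pictures Evans mentions.
train = [(make_thumbnail(g), g) for g in [True, False] * 50]

# Fit the simplest possible model: a brightness threshold midway between
# the two class means. In the real workflow this is where a trained model
# would be produced and then uploaded to the spacecraft.
good_mean = np.mean([img.mean() for img, g in train if g])
bad_mean = np.mean([img.mean() for img, g in train if not g])
threshold = (good_mean + bad_mean) / 2.0

def keep_for_downlink(img: np.ndarray) -> bool:
    """'On-board' filter: downlink only images classified as good."""
    return bool(img.mean() > threshold)

# Evaluate on fresh images, the way accuracy is judged on new frames.
test = [(make_thumbnail(g), g) for g in [True, False] * 25]
accuracy = float(np.mean([keep_for_downlink(img) == g for img, g in test]))
print(f"held-out accuracy: {accuracy:.0%}")
```

On real thumbnails the decision rule would be a trained image classifier rather than a single threshold, but the operational pattern is the same: label on the ground, deploy the model on board, and downlink only what passes.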

STEVE HENDERSHOT 
To get to 95%, you trained the algorithm on an initial set of 5,000 images. Could you get to 99.9% just by additional training of the misses and hits it has had since then? Or is this a situation where someone else is going to rewrite the algorithm to solve for that last 5%?

DAVE EVANS 
Maybe. I mean, you’ve definitely hit on one of the main problems—the AI is only [as] good as the quality of the data set that you give it to train on. It really depends on the problem you’re trying to solve. One example is trying to work out when the satellite is misbehaving. AI definitely has potential for that, but there are very few occurrences of the satellite really misbehaving, even on OPS-SAT. Then you don’t get the quality of data set you need to train the models. So this is definitely one issue that needs to be faced.
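Evans’ data-scarcity point can be made concrete with a toy calculation. When genuine fault examples are rare, a model that never predicts “misbehaving” still scores near-perfect accuracy while catching nothing, which is one reason sparse failure data makes these models hard to train and evaluate. The counts below are invented for illustration:

```python
import numpy as np

# 995 nominal telemetry frames and only 5 anomalies: the kind of
# imbalance Evans describes for a satellite that rarely misbehaves.
labels = np.array([0] * 995 + [1] * 5)  # 0 = nominal, 1 = misbehaving

# The "lazy" model that always answers nominal.
always_nominal = np.zeros_like(labels)

accuracy = float(np.mean(always_nominal == labels))
recall = float(np.sum((always_nominal == 1) & (labels == 1)) / labels.sum())

# Accuracy looks excellent even though no anomaly is ever detected,
# so raw accuracy is a misleading target on this kind of data.
print(f"accuracy: {accuracy:.1%}, anomaly recall: {recall:.0%}")
```

With so few positive examples there is little for a model to learn from, and even evaluation becomes tricky: metrics like recall on the rare class matter far more than overall accuracy.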

STEVE HENDERSHOT 
On the idea of tech misbehaving, OPS-SAT was created with recoverability in mind. Did any experiments—AI or others—necessitate that functionality to recover after a failed experiment?

DAVE EVANS 
No, it was a smooth implementation. We never had a problem caused by AI. Other experiments did cause problems, but they were more to do with things like satellite rotation and communication protocols. But AI itself was never an issue, shall we say. You’ve got to remember also that we fly a processor which is very powerful, much more powerful than a normal spacecraft command and control processor, and we could load these application frameworks out of the box onto the spacecraft. So one must take into account that a spacecraft going to Jupiter would not normally fly a processor as powerful as we had.

STEVE HENDERSHOT 
But theoretically, if you were able to deploy software like that and it holds up, then you could load the same program on a satellite headed to Jupiter.

DAVE EVANS 
Yes, yes. Absolutely, but it does bring me to the issue of AI adoption on board. Because 95% was, for us, a complete win. Our problem was solved, and we were willing to give up 5% of our data for ease of operations, but if you send a satellite to Jupiter, you’re not going to do that. I think that’s what I took out of it. It was amazing how quickly, with not much trouble, something could be adopted that worked at 95%, but that’s nowhere near the operational level of accuracy that you require for a big mission, or a mission that isn’t just doing a technology demonstration. It’s an amazing thing. Really, really amazing, but not quite there yet for safety of life, or for anything where it’s going to cost you a lot of money if you lose that sort of data.

STEVE HENDERSHOT 
What areas of space exploration do you see AI being a good fit for, given its current abilities? 

DAVE EVANS 
Well, I think it could be really useful on the ground to start with. Normally, the technology on the ground makes a step. And then, after a while, you put it on board, once you have full confidence and understand it. Again, there are problems there to be overcome. I mean, one is assisting the ground engineers, or the spacecraft engineers controlling the spacecraft, to help them make decisions. It’s being worked on at the moment by ESA and has quite a high adoption rate; 50% of the missions here at the European Space Operations Center are adopting a form of AI assistant to help them. But it’s like having an assistant that’s very fast, very enthusiastic, but doesn’t have the experience. So you have to check the work of the assistant. And just like when you have an assistant, you may find some interesting aspect which you didn’t think about before, because the assistant is so good at looking at all the different data sources and finds something unique. But at the same time, it can also make very big mistakes if you just let it make the big decisions. So I think that could be a very good step to start with.

Why the space industry views AI with heavy interest—and healthy skepticism

STEVE HENDERSHOT 
What’s the wider conversation like within the space sector? Has there been pushback around the idea of wider uses for AI? Or is there a lot of willingness and enthusiasm to try? 

DAVE EVANS
There is a fascination with AI, definitely, also within spacecraft operations. People are interested in this. There’s also a lot of investment going on around the world, and the space industry is no different. But of course, when you’re operating a spacecraft, you have to be really, really sure, because the cost of a small error can be enormous. This is what makes it so special. So there’s a lot of skepticism about letting AI make those big decisions alone. I think people are having the same experience I did, but in a different context. So yeah, it’s interesting what’s coming out. It’s not always right; people can see the potential, but it’s not quite at the point where you think, “Okay, right, I can rely on this.”

The first thing that’s got to happen is you’ve got to work on the data. Like I said before, the AI is only as good as the data. Think about George as an assistant who can go and look at all the different archives and all the records and the telemetry and the anomaly reports instantly. He can get all this information together, but he needs some way to put it together in a way which makes sense in order to do the analysis. That isn’t possible yet, because we always assume that a person is going to look at that data. It isn’t labeled for a computer to read. There’s lots of data there, but it isn’t quite prepared—ready for AI, if you like.

And I think this is perhaps true of many, many things. If you don’t label the data correctly or give it the relevant context, then you’ll just disturb the AI. So I think if you want to get it to very high levels of usefulness and adoption, then the data has got to be arranged correctly and connected up. So I think that’s the first step, and it will get better. That’s the nature of the world. The technology always gets better. I’ll be interested to see how fast it gets better. I think it’s coming, and I wouldn’t mind an AI assistant helping me.

MUSICAL TRANSITION

STEVE HENDERSHOT 
Are you enjoying this episode? Please leave us a rating or review on Apple Podcasts, Spotify or wherever you listen. Your feedback helps us keep making this show.

Now, let’s go to our next conversation. It’s with Dave Salvagnini, the chief data officer and chief artificial intelligence officer at NASA in Washington, D.C. Dave’s AI role began in May 2024. He spoke with Projectified’s Hannah LaBelle about how he’s helping the U.S. space agency apply AI responsibly. He also shares a few ways NASA is using the tech across its project portfolio.

MUSICAL TRANSITION

HANNAH LABELLE 
Dave, it’s great to speak with you today. You’ve been with NASA now for over a year and recently were named chief AI officer. So tell me a little bit about the responsibilities that this position entails.

DAVE SALVAGNINI 
The position was established as a result of an executive order from the Biden administration, which required federal agencies to establish chief artificial intelligence officers, largely for a couple of reasons. One was to protect [the] interests of U.S. citizens from a privacy perspective: to ensure that the use of AI in the federal government is free of bias, is not in any way harming our population, and that privacy concerns are addressed. The other was transparency. So my role really is looking at NASA and how we are ethically, responsibly and transparently using AI.

Our mission is quite research-heavy, certainly in the areas of aeronautics and space exploration, human space flight, climate, and other areas of science. I do have to make sure that the responsibilities in the federal guidelines are satisfied: things like not only having a chief AI officer but also establishing a governance function within the agency that reports at a very senior level (in NASA’s case, to the deputy administrator), and maintaining an inventory of what AI work is ongoing. And that’s a perpetual inventory. In other words, even if two years ago we piloted an AI capability, determined that it wasn’t viable and took it out of service, we would maintain a record of that in our AI registry. And then as new capabilities come along, we would add those to our registry as well. The key purpose of that is understanding what AI work is going on across the agency, but another ancillary benefit is making sure that other parts of the organization can benefit from that awareness. So I often refer to the role as looking after not only the compliance requirements from the administration but also doing what’s right for NASA, and what’s right for NASA is coordinating and orchestrating our AI journey across all of the various disciplines where AI may be used. And, of course, the new area is a large focus on generative AI.

How the U.S. space agency is using AI on projects

HANNAH LABELLE 
We’ve definitely seen the generative AI boom over the last few years. More day-to-day tech tools that teams use have some sort of generative AI function. How does that particular aspect fit into your work across NASA?

DAVE SALVAGNINI 
We see far more people than have traditionally been involved in AI accessing AI tools and starting to leverage their capabilities across the NASA workforce. It’s about orchestration, coordination and awareness, but also preparing the workforce at NASA to be able to safely and responsibly use the tools. Because what is happening now is AI is being taken out of the hands of experts. Whether they’re modeling climate change parameters from various Earth- or space-based sensors or building autonomous systems that are part of space exploration, those are very embedded, very specialized use cases.

You now expand to the entire workforce, where everyone has access to maybe a Copilot from Microsoft or a ChatGPT, and you’re making sure that they’re able to responsibly use those tools and also safeguard NASA information that we may not want out in the wild. Obviously, it’s emerging, because this space is emerging as well, with the rapid developments in the generative AI arena and the incorporation of AI capabilities into so many commercial products that are used by most organizations on a day-to-day basis.

HANNAH LABELLE 
So you’re seeing more team members have access to and use these types of AI tools, but NASA has been using AI on projects over the last few decades. Can you share a couple of project examples?

DAVE SALVAGNINI
I’ll give you a couple examples in the area of science, but in particular with climate. Using space-based imagery, NASA was able to identify forests in sub-Saharan Africa. The development of a forest, or a cluster of trees that are mature, indicates a change in the climate within that region over time. You know, it’s not a single season, it’s actually a change in climate over time, a trend. So this is an example of, let’s say, AI’s strength in the area of pattern recognition, to classify and identify a forest using large amounts of space-based imagery.

Another one is, we were able to take data that was collected over years by two different sensors—it was over 11 years—to validate the existence of exoplanets. Out of a data set of 2,600 objects, over 300 were identified as exoplanets. But what’s interesting is that using AI, as compared to the human classification mechanisms that have traditionally [been] employed, was certainly faster, orders of magnitude faster, and also more accurate. What we found is with human or traditional classification means, we were in the 70% accuracy range, and we actually went into the 90% accuracy range using AI. This data set that I’m referring to, we stopped collecting that data in 2018, but being able to apply AI that’s available now, and that wasn’t as well-developed or mature then, has allowed us to use an existing data set from the past and learn new things.

We see use cases where NASA is doing research in aeronautics, and we’re optimizing, let’s say, the routing of air traffic. Think about, well, when would a plane that’s destined to go from a departure point to a destination, when would it be best for that plane to actually take off based on the air traffic control system, based on flows and so on? And what we found is that we can significantly reduce the number of emissions by more efficiently handling that traffic. And in doing so, fewer flights, fewer CO2 emissions and so on. And this is all based on route optimization using AI.

And then I’ll give you the last example in the area of life sciences. We have life sciences data from NASA missions going back to 1961, many, many terabytes of data. And we’ve put natural language query capabilities on top of all of that data. We’re able to classify that data in a way that makes information about astronaut health and about other biological research much easier to parse through and gain insight from. When considering the effects on biological systems and people in space, we’re able to pull from that research. And by the way, that research is not only NASA-based, but it’s also from other countries and research institutions and so on. So just think about the power of AI in helping you draw conclusions and see connections between different research aspects as it relates to a particular medical or biological topic.

HANNAH LABELLE 
I think those are all fascinating, especially given kind of the breadth of departments or units that are using AI for different things. I want to come back to that ethics portion that you talked about earlier with your role. Given how many people across the organization might be using AI tools, how are you ensuring that you’re integrating AI responsibly, considering the ethics, risk, different things like that, on projects across NASA’s portfolio?

DAVE SALVAGNINI 
Now you’re dealing with workforce behavior, right, and helping people understand what the safe guardrails are for the use of these tools. So I think first and foremost is awareness: making sure that people understand there’s a lot of goodness in a lot of the generative AI tools that are readily available, but there are some pitfalls. So we’re making them keenly aware of those pitfalls. We have a large campaign going on right now. It’s been a surge of training activity, with lunch-and-learns going on multiple days each week, and NASA centers having all-day AI events where they’re bringing in external speakers. And we’ve got a vast amount of online training content that’s available to the workforce as well. We’ve reached just under one-third of the entire NASA workforce. It’s a multimodal approach to presenting training content. And this is really about raising awareness, not only of some of the safe use parameters, but also some of the opportunity space. It’s also meant to create curiosity and have people start thinking about where they could potentially employ AI.

The other thing is giving people access to experts whom they can consult to make sure that the various piloting activities or other initiatives they may want to pursue are viable and are consistent with NASA policies. We have a grassroots-led AI/ML (machine learning) consultation team that has initiated an effort to advise organizations looking to pursue AI pilots. And, of course, they’re aware of the safeguards and some of the policies and some of the pitfalls as well. So you train and you make people aware, and you then give them access to experts who have been working in this field for quite some time.

And then, lastly I would say we are building policies specific to some of the risks associated with AI. And I’ll give you an example. When our creatives team came to us and said, “Hey, we’re concerned about the effects of generative AI, and if people, for example, start using DALL-E to create images on NASA-branded products, we have grave concern about that because the quality of those images is suspect.” You can ask DALL-E to give you a graphic or an image of an astronaut, let’s say on the surface of the moon. And if you look at that image, lots of times, you’ll see defects in the astronaut’s spacesuit, you’ll see defects in, let’s say a flag, like an American flag where there’s not enough stars, not enough stripes, it may be upside down, a whole host of other things. So putting policies in place to say, “Well, no, using DALL-E, for example, as a tool for the creation of images that are going to go on NASA-branded products is a concern for a number of reasons. One, we don’t know what the sources are, and we could be infringing copyrights, but secondarily, there are also quality concerns associated with it.”

HANNAH LABELLE 
Absolutely. You’ve obviously been reaching a lot of NASA team members. What’s the organization’s overall attitude toward using AI? Obviously, folks are talking about it, and there are definitely concerns being brought up and talked through.

DAVE SALVAGNINI 
I would characterize the pulse as very, very excited about the opportunities, with a healthy dose of caution, I would say. Think about NASA for a moment and the kinds of things we do; managing risk is very much part of how we build systems and how we do space exploration. Whether it’s an unmanned or a manned mission, managing risk is how this organization has matured and been able to do what it does. So we certainly have people who look at AI, look at all the possibilities, don’t want to impede innovation, but by the same token, they want to make sure that risk management measures are in place so that, for example, an employee isn’t blindly following the output of an AI algorithm, basically abdicating their accountability for its accuracy. I think there’s lots of optimism, lots of excitement, but by the same token, there’s certainly a recognition that we have to be smart and measured, and we have to manage the risk of some of the unintended consequences of AI use if we’re not being responsible and informed in how we employ these capabilities.

How AI could transform space projects

HANNAH LABELLE 
How do you anticipate AI will transform projects in the space sector looking at the years ahead? What are the risks or challenges that you see space organizations facing, and what opportunities do you see the technology bringing to space exploration, especially thinking about deep space exploration, autonomous missions, anything like that, kind of what’s to come?

DAVE SALVAGNINI 
I would characterize the value that AI is going to bring to the space industry, writ large, in a number of ways. Let’s just say any organization is going to benefit from AI in its support functions. When you think about the budgeting, the finance, the performance reporting, the contracting activities, the facilities work, the logistical pipelines: all of those activities, along with IT and cyber, will benefit from what AI brings. So in mission support, I think we’re going to see efficiencies generated, because as a workforce finds ways to use generative AI, for example, that’s going to have a huge dividend for everything that any organization does.

Then I would say there’s the caretaking of systems. If you think about a habitat, for example, on the surface of the moon, and you think about how AI can be part of assuring the safety of that habitat, assuring that maybe we put in place some self-healing capabilities within that habitat so that, if certain parameters start to be exceeded in a manner that was unexpected, then the AI can perhaps take measures to address the anomaly. You talked about autonomous systems, and you think about a space vehicle and just the care and feeding of all the systems on that space vehicle and being able to, again, assure its resiliency in what are extremely harsh environments. But not only its resiliency, its ability to self-heal.

I think the other thing would be augmenting the crew. So think about, let’s say, a mission to Mars, where the time to get to Mars is months. What if a medical incident occurs with the crew, and maybe they’re unable to communicate with mission control for one reason or another, given where they are on that journey? Having access to a repository of content that they can use to help diagnose a medical condition during the mission is yet another way AI can help. It’s going to enhance crew health and safety. It’s going to enhance mission effectiveness. And I think, quite candidly, we’re going to see benefits in our future that we may not have even imagined or realized at this point as we continue to go forward.

STEVE HENDERSHOT
Thanks for listening to Projectified. Like what you heard? Subscribe to the show on your favorite podcast platform and leave us a rating or review. Your feedback matters a lot—it helps us get the support we need to continue making this show. And be sure to visit us online at PMI.org/podcast, where you’ll find the full transcripts for episodes as well as links to related content like useful tools and templates, the latest research reports, and more. Catch you next time!

You Might Also Like...

  • Artificial Intelligence in Project Management | Explore
  • Transforming Project Management With Generative AI Innovation | Read
  • Career Development: How Project Managers Can Thrive with AI | Listen
  • The Power of the Prompt: GenAI Techniques, Skills, and Strategies for Project Professionals | Download