Schmidt Futures - and, in particular, Ashwin Ramaswami - prepared this report on the advisability of setting up a federal OSPO. I reviewed it before it was published, so Sustain got a byline, which is nice.
I had no idea that this report was about OSPOs. There are going to be a lot of problems for ALL OSPO++ orgs if they are funded under the auspices of improving cybersecurity outcomes, but then people realize that is only 1/3 of the purpose of these organizations.
I also agree that there’s too much emphasis on cybersecurity. I believe the economic value that OSS generates should be the primary focus.
We should have enough data by now showing that open source is a uniquely good investment, more like this: “… was conservatively estimated to give a 200% return on investment, in addition to creating a “thriving, mutually beneficial ecosystem” of individuals, government agencies, and private entities. Moreover, public support of OSS has been shown to lead to substantial increases in jobs in the IT sector.”
On the other hand, security is probably an easier sell.
I also believe grant-based funding schemes won’t scale well; they would be too bureaucratic and slow for the dynamic open-source ecosystem.
Still, as someone who has been advocating for financing open source with public money, I think this is a fantastic initiative. Let’s see how it progresses.
It’s also a shame all the endorsers are men. And someone really needs to make a new cartoon: https://xkcd.com/2347/
Yes, 2x.
This is an interesting discussion. I am not sure the opening premise is precise enough to be actionable:
"Open source software is widely relied upon, but poorly supported, putting our national security at risk."
In fact, there is a wide range of support levels, and they vary significantly by project purpose. For example, I would argue the Linux kernel is nearly the opposite of “poorly supported”, as are many open source projects that fall into the category of critical infrastructure. I am not saying the myriad open source communities cannot do better and be more systematic, but framing open source software as a “single thing” or single category of software makes it hard to know where to direct resources.
The Risk working group of the CHAOSS Project (a Linux Foundation project) has developed a series of “minimum viable metrics” after lengthy discussion of the complex relationships between software dependencies, software vulnerabilities, and security risks. We will be summarizing our discussions and the related “minimum viable metrics” shortly.
The report’s call for resources may be premature; I don’t think most OSPOs or open source communities yet have enough information to make generalized, strategic decisions.
Instead, from a security perspective, IMHO it may make sense to help organizations — especially those controlling critical infrastructure or information “honey pots” — harden their security at the network layer. All software is less vulnerable when sufficient network engineering resources sit in front of it. In my own experience I’ve secured systems running software with unknown vulnerability levels using techniques like DMZs and IP address filtering.
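A toy sketch of the IP-filtering idea, for anyone unfamiliar with it (the subnet and addresses below are made up for illustration; in practice this check lives in firewall or router configuration, not application code): admit traffic only when the source address falls inside an allowlisted subnet.

```python
import ipaddress

# Hypothetical trusted management subnet for this example.
ALLOWED = ipaddress.ip_network("10.0.42.0/24")

def allow(source_ip: str) -> bool:
    """Admit a connection only if it originates inside the allowlist."""
    return ipaddress.ip_address(source_ip) in ALLOWED

print(allow("10.0.42.17"))   # True: inside the trusted subnet
print(allow("203.0.113.5"))  # False: outside, so the traffic is dropped
```

Everything that fails this check never reaches the software behind it, whatever that software’s own vulnerability level happens to be.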
Ultimately, I support the recommendations in the linked report, but would like to see movement toward precise goals like those I suggest above. Certainly not only my specific thoughts, but also those of other experienced cybersecurity experts. Good network-layer security can be deployed relatively swiftly for critical infrastructure and information “honey pots” while we figure out the best strategies for managing software risk. My metaphors are often strained, but I suggest we first build a network-layer fort, and then polish our cannonballs.
I just made a rather detailed response. On the topic of OSPOs specifically, Duane O’Brien at Indeed.com is one OSPO leader I’ve spoken with who has a clear vision of the questions OSPOs need answered in order to meet the software side of what I think is a much larger risk profile than open source software alone.
In fact, I might argue that in many cases proprietary software poses an even greater risk, because those vulnerabilities are much easier to keep out of the public spotlight. It’s kind of like (tortured simile this time) saying that your risk visibility is greater in a black box than in a glass box. That doesn’t pass my personal “smell test”.
Both proprietary and open source software benefit from network-layer gatekeeping. If I were spending government money right now, I would spend a lot of it there.
Wait…. What?! Have we learned nothing about how many blind spots all male panels have?! I am simultaneously shaking my head, laughing, and crying.
I don’t know, we’ve learned a few things. I’ve learned how easy it is to ask for my name to be removed.
Thanks, Alyssa. I’m not perfect and I hadn’t noticed.
The most important thing Duane is doing is getting consensus about the metrics that need to be tracked/governed.
Excluding banks, defense, and a few other regulated industries, few companies have Chief Risk Officers. BUT, CIOs measure risk all the time.
Indeed (pun intended), Duane has done a great deal to help us sort through the sometimes confusing, overlapping, and unclear jargon that different computing specializations use to discuss software security, vulnerability, and dependency in open source.
A lot of the synthesis that CHAOSS has done, and is in the process of translating into a clear(er) articulation of the issues at hand, has come through Duane, Kate Stewart, David Wheeler, and Sophia Vargas’s commitment of time and relentless pursuit of getting these issues to the point where they are clear enough to support coherent discussion and strategery.
Since I like to repeat myself, I’ll add that we have a much greater understanding of the risks of open source than we do for proprietary software. I think when it comes to questions of securing critical infrastructure and information this discussion needs to account for the black box and the glass box. IMHO.
From the open source software security perspective, I think one of the most impactful investments is setting up free scanning and testing services for open source. One of the earliest of these was the Coverity Scan service, which is widely used by open source projects. A more recent but also very popular example is Google’s OSS-Fuzz project. There are services that scan for CVEs in dependencies, services that scan for typical or well-known coding errors, and systems that check for HTTPS on websites and signatures on download files. They require the bare minimum of investment and effort from the developer and yield a stream of useful test data. Any free automated service like this, tailored to doing repetitive work for developers, will have tremendous payback. So, I recommend looking at opportunities for helping developers meet any emerging new standards for secure software. If you want all these projects to have SPDX files, how about an SPDX “generator” or “checker”, as ubiquitous as the web cookie applets everyone uses?
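To show how cheap the simplest of these checks are, here is a hypothetical sketch (the URLs and the `audit_release` helper are invented for the example) of the HTTPS-and-signature check described above: flag an artifact if it isn’t served over HTTPS or has no detached signature published alongside it.

```python
from urllib.parse import urlparse

def audit_release(artifact_url: str, published_files: set) -> list:
    """Return a list of basic release-hygiene problems for one artifact URL."""
    problems = []
    if urlparse(artifact_url).scheme != "https":
        problems.append("not served over HTTPS")
    # Look for a detached signature published next to the artifact.
    if not any(artifact_url + ext in published_files for ext in (".asc", ".sig")):
        problems.append("no detached signature published")
    return problems

files = {"https://example.org/foo-1.0.tar.gz.asc"}
print(audit_release("https://example.org/foo-1.0.tar.gz", files))   # []
print(audit_release("http://example.org/foo-0.9.tar.gz", files))
```

A real service would crawl the project’s release page rather than take a file list, but the per-artifact logic stays this small, which is exactly why such checks scale to thousands of projects.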
I am not arguing that these activities are necessarily better than others that have been suggested, but I think they are contributions that open source projects would welcome.
Vicky Risk (Product Manager at isc.org)
The same regulation that is coming out that will require SBOMs will also address proprietary software. It will require vendors that do business with the US government to notify them whenever there is a security breach. It might do more than that, but I’m not sure.
Honestly, we haven’t gotten the jargon down yet – we’re closer, but there isn’t buy-in from vendors and different communities.
A lot of really great points here. My favorite: Checklists work. In my experience most professionals responsible for deploying core infrastructure for critical (safety, national security, medicine, etc.) systems use them with profound discipline. When I worked on therapeutic medical devices, we tested every potential system state, for example. Some systems have to be as deterministic as possible, but we obviously do not need to, and cannot afford to apply that standard to all software. (And, yeah, we definitely used heavily modified open source projects).
Regarding tools, my opinion is evolving toward deploying an array of tools and applying the same standards to all software. Here, I am thinking of these issues as requiring redundancy [simple example: the 737 MAX’s reliance on a single angle-of-attack sensor], and recognizing that the ~20 different tools we’ve looked at on the CHAOSS project to date, including the ones you mention, all have blind spots.
Pragmatic aside: it’s my personal opinion that there is a difference between solving the problem, which I would like to be the goal, and hiring a tool vendor to deflect management risk associated with software (open source and otherwise). Most organizations will, I suspect, hire a vendor; I just don’t think that will ever be enough. If we also harden network infrastructure in the ways I suggested in a prior response, and deploy a repertoire of tools aimed at dependency and vulnerability risks using different strategies, I think the goal of securing the world’s most critical systems is achievable within a certain tolerance for failure. Because, seriously, this is an arms race; we won’t win them all.
Of equal concern to the goal of securing open source software (IMHO) are the little pieces of proprietary software that act as known unknowns in many cases. And what about hardware? FPGAs are increasingly common in the field, especially at the end of the wire. Is it necessary in some cases to deploy multiple sensors from different vendors/projects?
IMHO, open source software plays a role, and we have challenges to meet. But even if we meet them all, I am not convinced the overall security threat to critical systems will be solved; I am reasonably convinced it will not be.
Finally … Some questions for others ….
- I’m not a networking security expert, but I have used sophisticated networking configurations to protect critical systems over the years, so I am most curious to know whether that community shares my intuition/belief/experience/perspective that committing resources to hardening that layer around software is the first place dollars should flow.
- I am also curious if others view the unknowns of proprietary software as a security policy issue that governments should address in parallel with discussions about open source software.
- Finally, putting on my tinfoil hat: it’s no secret that government agencies have back doors built into many systems. I’m curious what the relative risk to critical infrastructure is when those back doors are hacked by bad actors, especially in comparison to the challenges we are generally aware of.
Like most people here, I’m spending my time and energy looking at the open source software component; securing open source software. This is necessary, IMHO, but not sufficient. The cybersecurity threat matrix, even from a cursory, conceptual point of view, doesn’t put open source software at the top of my list of things to commit the most resources to right now. Some, yes. Definitely.
One of the primary reasons for these vulnerabilities is that OSS is often maintained by volunteers who do not always prioritize security.
What is the responsibility of these guys if it turns out that they are spreading misinformation, and OSS is in fact often maintained by volunteers who do prioritize security? The incentive should be focused on supporting those volunteers, instead of creating the impression that they should be replaced in the name of national security. I don’t want to bring up this topic, but in Belarus people are detained daily for wearing red and white colors, and that too is done in the name of national security.