Delve into the complexities of achieving memory safety in software development, exploring the challenges of transitioning away from non-memory safe languages and discussing strategies and methodologies for becoming memory safe.
Last week we discussed what memory safety is and looked at the impacts of using non-memory safe languages. In this second part of our two-part blog series, we will offer insights into the evolving landscape of memory safety and its implications for the future. We will explore how to achieve memory safety, discuss the adoption of memory-safe languages, and highlight mechanisms for reducing memory safety issues.
Why Aren’t We Memory Safe Already?
It’s no easy task to move away from non-memory safe programming languages, which have been relied on since the 1970s and used to build much of the low-level functionality of the systems we use today. Many critical applications and software infrastructures still depend on these languages. Removing them entirely would mean rewriting enormous amounts of software, which at this point may simply not be feasible.
Up until a few decades ago, almost all system software was written in C or C++, both non-memory safe languages. Their use has declined in recent years thanks to Java, .NET, and other languages, but they are still heavily relied on. According to Statista, in 2023 about 41.76% of software developers were using C++ or C. There is a reason for this continued reliance: C and C++ offer significant advantages. C++, for example, is generally more efficient than C#: it is very fast, has a small memory and disk footprint, is mature and predictable, runs on a wide range of platforms, and does not require additional runtime components to be installed.
Most of the advantages of non-memory safe languages lie in execution speed and executable size, which is why they are still used despite the security risks. Code written in a lower-level, non-memory safe language will often simply run faster. Higher-level languages such as Java, the .NET languages, and JavaScript run on a virtual machine or managed runtime, which provides a layer of abstraction and memory safety for the programmer but can also slow the system down.
Non-memory safe programming also allows the precise control and predictability needed for low-level work such as kernel development and embedded systems programming.
In certain situations, it may be necessary to use a non-memory safe language, for example when writing drivers or embedded software, or when interacting directly with hardware components.
There are other reasons, beyond performance, why memory-safe languages may not yet have been adopted.
It’s also important to remember that memory-safe languages are not perfect: the safety checks they add can introduce overhead of their own that may reduce system performance.
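As a rough illustration of that trade-off, here is a small Rust sketch of our own (not tied to any specific benchmark): indexing a slice carries a runtime bounds check, while iterator-based code gives the compiler enough information to skip the check entirely.

```rust
// Two equivalent sums. Indexed access is bounds-checked at runtime
// (the optimizer can often, but not always, remove the check), while
// the iterator form needs no per-element check at all.
fn sum_indexed(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        total += data[i]; // would panic if `i` were ever out of range
    }
    total
}

fn sum_iter(data: &[u64]) -> u64 {
    data.iter().sum()
}

fn main() {
    let data: Vec<u64> = (0..1_000).collect();
    assert_eq!(sum_indexed(&data), sum_iter(&data));
    println!("sum: {}", sum_iter(&data));
}
```

In practice the cost of such checks is usually small, but it is not zero, which is part of why performance-critical code has been slow to move.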
Becoming Memory Safe with Safe Languages
The most obvious and effective way to be memory safe when creating new programs is to use memory-safe languages exclusively from the start. The White House published a report earlier this month, titled Back to the Building Blocks: A Path Toward Secure and Measurable Software. In it, the Office of the National Cyber Director (ONCD) urges the adoption of memory-safe programming languages: “The highest leverage method to reduce memory safety vulnerabilities is to secure one of the building blocks of cyberspace: the programming language.”
Some of the more popular memory-safe languages include C#, Java, TypeScript, Ruby, Python, Rust, Swift, Kotlin, and Go.
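To make that concrete, here is a minimal Rust sketch of our own (not taken from the report) of what “memory safe by default” looks like in practice: the compiler rejects code that would hand back a dangling reference, and out-of-bounds accesses are caught rather than silently corrupting memory.

```rust
// The borrow checker refuses to compile code that would return a
// dangling reference. Uncommenting this function produces a compile error:
//
// fn dangling() -> &'static String {
//     let s = String::from("hello");
//     &s // error: cannot return reference to local variable `s`
// }

// The safe alternative transfers ownership to the caller instead.
fn owned() -> String {
    String::from("hello")
}

fn main() {
    println!("{}", owned());

    // Out-of-bounds accesses are detected instead of reading or writing
    // past the end of the buffer.
    let data = vec![1, 2, 3];
    let index = 10;
    match data.get(index) {
        Some(value) => println!("value at {index}: {value}"),
        None => println!("index {index} is out of bounds"),
    }
}
```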
When expanding or altering established software written in a non-memory safe language, one option is to rewrite the entire program so that it’s memory safe. This will likely take significant time and resources, and will also require retraining developers or hiring new ones.
Programmers can instead write new modules for an existing code base in a memory-safe language, incrementally and gradually moving toward a completely memory-safe system. This may require building data structures and interfaces that allow data exchange between the two languages until the whole system is upgraded to be memory safe, as sketched below.
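For teams whose existing code is C or C++, one common shape for that bridge is a new Rust module built as a static or dynamic library and exposed through a C-compatible interface. The sketch below is purely illustrative; the `checksum` function and its signature are our own invention, not a prescribed pattern.

```rust
// Hypothetical new module written in Rust and built as a `cdylib` or
// `staticlib` crate so that existing C/C++ code can link against it.
use std::slice;

/// Sums a byte buffer owned by the C caller.
/// `#[no_mangle]` + `extern "C"` give the function a stable, C-callable symbol.
#[no_mangle]
pub extern "C" fn checksum(data: *const u8, len: usize) -> u64 {
    // The pointer arrives from non-memory safe code, so the boundary itself
    // is `unsafe`; everything beyond this point is ordinary, checked Rust.
    if data.is_null() {
        return 0;
    }
    let bytes = unsafe { slice::from_raw_parts(data, len) };
    bytes.iter().map(|&b| u64::from(b)).sum()
}
```

On the C side, the only change is a declaration such as `uint64_t checksum(const uint8_t *data, size_t len);` plus a link step against the Rust-built library; the legacy code keeps its own build system.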
If the system is too complicated or widespread, is a legacy system, or is built on top of a legacy system, it may not be feasible to simply rewrite the entire program. In that case, a more gradual, methodical approach can be used.
In ‘The Case for Memory Safe Roadmaps: Why Both C-Suite Executives and Technical Experts Need to Take Memory Safe Coding Seriously’, the United States Cybersecurity and Infrastructure Security Agency (CISA) and partner agencies provide helpful guidance for manufacturers, with steps for eliminating memory safety vulnerabilities from their products, centered on creating and publishing a memory safe roadmap.
Becoming Memory Safe Through Other Methods
Although implementing memory-safe languages is the most obvious method, memory safety can also be improved through other techniques, or a combination of techniques, that together form a comprehensive memory safety strategy.
Programmers can also take advantage of application programming interfaces (APIs) so that an entirely new language does not need to be learned; many Rust components, for example, ship with a C API.
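The interoperability runs in the other direction as well: Rust code can call an existing C API directly through an `extern` declaration, so a legacy component can be reused as-is while new code stays in the memory-safe language. In this sketch, `legacy_parse` is a hypothetical name standing in for any function exported by an existing C library.

```rust
// Hypothetical binding to a function exported by an existing C library.
// Only the declaration is needed here; the legacy implementation is
// linked in unchanged.
extern "C" {
    fn legacy_parse(input: *const u8, len: usize) -> i32;
}

/// Safe wrapper that confines the `unsafe` surface to a single place.
pub fn parse(input: &[u8]) -> Result<i32, String> {
    // SAFETY: the pointer/length pair is derived from a valid Rust slice.
    let status = unsafe { legacy_parse(input.as_ptr(), input.len()) };
    if status >= 0 {
        Ok(status)
    } else {
        Err(format!("legacy_parse failed with status {status}"))
    }
}
```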
What Now?
Given the continued growth of operating systems and the explosion of Internet of Things (IoT) devices, the imperative to explore memory-safe alternatives is becoming increasingly important. There is a worldwide push to switch to memory-safe programming languages, especially as government entities make it a priority.
In the White House report mentioned above, responsibility is placed on programmers themselves: “Programmers writing lines of code do not do so without consequence; the way they do their work is of critical importance to the national interest.” This pressure will likely increase as more guidelines are issued, and perhaps even regulations established, to keep the movement toward memory-safe systems going.
Many are already well on their way toward achieving memory safety. Google reports that as it has reduced the amount of non-memory safe code in Android, memory safety vulnerabilities have fallen as well: from 2019 to 2022, the percentage of Android vulnerabilities caused by memory safety issues decreased from 76% to 35%, and 2022 was the first year that memory safety issues did not represent the majority of Android’s vulnerabilities.
A number of other organizations have also announced recent strategies for moving toward memory-safe systems.
At Buildable, we primarily use C#, TypeScript, and Python, plus Java and Swift for Android- and iOS-specific cases. This is a conscious choice. We have 20 years of experience with C# and JavaScript, so we have watched these languages evolve and appreciate the increasingly high standard they set for programmers with each new release. We put an emphasis on thread safety, memory safety, and execution safety to shield the software we write from vulnerabilities that can creep in through even mundane tasks.
By prioritizing memory safety in software development practices and leveraging the right tools and methodologies, organizations can mitigate the risk of memory-related vulnerabilities and build more resilient and trustworthy software ecosystems. Customers, when given a choice, will choose the safer option, propelling us further into a memory-safe future.