The Prism Technology Platform is at the core of CriticalBlue's business, and the deployment of our technology is central to ongoing customer relationships. Data generated by our technology during projects is used to analyze customer software applications and rapidly identify the root causes of poor performance. This in turn allows us to implement the necessary improvements.
Our technology platform is built on Dynamic Binary Level Analysis, which supports the dynamic instrumentation of compiled software running on most hardware platforms. This allows performance data to be captured down to the detail of individual instruction execution, opening up a wide range of analysis possibilities for our engineers.
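As a rough illustration of what per-instruction capture means in practice, the sketch below models a trace record of the kind a dynamic binary instrumentation pass might emit for each executed instruction. The field names, addresses, and opcodes are invented for illustration and do not reflect Prism's actual trace format.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-instruction trace record, illustrating the kind of
# data dynamic instrumentation can emit for every executed instruction.
# Field names, addresses, and opcodes here are invented.
@dataclass(frozen=True)
class TraceRecord:
    pc: int                  # address of the executed instruction
    opcode: str              # decoded mnemonic
    mem_addr: Optional[int]  # effective address for loads/stores, else None
    is_write: bool           # True for stores

def memory_accesses(trace):
    """Filter the trace down to records that touched memory."""
    return [r for r in trace if r.mem_addr is not None]

trace = [
    TraceRecord(0x1000, "add", None, False),
    TraceRecord(0x1004, "ldr", 0x8000, False),
    TraceRecord(0x1008, "str", 0x8004, True),
]
print(len(memory_accesses(trace)))  # 2
```

Once execution is captured at this granularity, higher-level analyses (error detection, cache modeling, hotspot detection) become queries over the trace.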
A key advantage of this approach is that components of our technology can be deployed with the customer to provide a basis for ongoing support and future development once the initial project is delivered.
In our Insights section, you can find articles that show the level of analysis and optimization available when using our technology platform.
Throughout the collaborative engagement phase of a project, we use the Prism Technology Platform in-house to develop new analysis, visualization, or runtime techniques to meet the needs of the particular project. Once we have established how the technology fits the customer's needs, it is rapidly packaged and deployed with appropriate training and support.
Since the Prism technology is so flexible, the resulting deployment to the customer can take many different forms, including tools, compilers, and runtime libraries. Some examples of this approach are given in the following sections.
Several analysis capabilities from the Prism Technology Platform were used to allow the customer to visualize memory access errors found by dynamic instrumentation of their software, including buffer overruns, stack corruption, and data races on multicore platforms. Since the analysis is based upon a trace of runtime behavior, the memory accesses leading up to an error can be traced back through the source code within the IDE, making it much easier to find the original mistake in the code.
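A minimal sketch of the buffer-overrun case: each recorded memory access is checked against the allocations that were live at that point in the trace. The allocation table and addresses below are invented; a real tool would derive them by instrumenting the allocator.

```python
# Minimal sketch of trace-based buffer-overrun detection: every
# recorded access is checked against a table of live allocations.
# Addresses and sizes here are invented for illustration.
def find_overruns(allocations, accesses):
    """allocations: {base_addr: size_in_bytes};
    accesses: list of (pc, addr) pairs from the trace.
    Returns the (pc, addr) pairs outside every allocation."""
    errors = []
    for pc, addr in accesses:
        ok = any(base <= addr < base + size
                 for base, size in allocations.items())
        if not ok:
            errors.append((pc, addr))
    return errors

allocs = {0x8000: 16}                   # one 16-byte buffer
accesses = [(0x1004, 0x8008),           # in bounds
            (0x1008, 0x8010)]           # one past the end: overrun
print(find_overruns(allocs, accesses))  # [(4104, 32784)]
```

Because each flagged access carries its program counter, the error links straight back to a source location, which is what makes the IDE walk-back described above possible.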
Leveraging the detailed trace capture features of our technology platform, and our ability to analyze the resulting data in the context of the underlying hardware, the customer can assess how well their software fits a particular processor architecture based on dynamic instrumentation of its runtime behavior. This further supports architecture-specific optimization by highlighting problem areas, which are presented in easy-to-use visualizations and tables that link through to the relevant locations in the source code.
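One way to judge architectural fit from a trace is to replay the captured addresses through a model of the target's cache. The toy direct-mapped model below stands in for a full architectural model; the line size and set count are illustrative, not any specific processor's.

```python
# Sketch of replaying a captured address trace through a simple
# direct-mapped cache model to estimate architectural fit.
# Parameters are illustrative, not any specific processor's.
def miss_rate(addresses, line_bytes=64, num_sets=64):
    tags = [None] * num_sets
    misses = 0
    for addr in addresses:
        line = addr // line_bytes       # which cache line this address is in
        s = line % num_sets             # which set that line maps to
        if tags[s] != line:             # miss: fetch the line
            misses += 1
            tags[s] = line
    return misses / len(addresses)

# Sequential walk over 4 KiB: one miss per 64-byte line touched.
seq = list(range(0, 4096, 4))
print(miss_rate(seq))  # 64 misses / 1024 accesses = 0.0625
```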
In the case of complex, long-running firmware applications, the memory access pattern gradually evolves over time, making static optimization difficult. Performance degrades as data structures grow large and fragmented, driving up the cache miss rate. A lightweight version of the Prism dynamic instrumentation technology forms the basis of a software library which detects data hotspots and remaps the data in memory at runtime to maintain good cache behavior.
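The core of the remapping idea can be sketched as: count accesses per data object, then lay the hottest objects out contiguously so they share cache lines. The object names, sizes, and access log below are invented; a real library would work on live heap objects and move them in place.

```python
# Conceptual sketch of runtime data remapping: rank objects by access
# count, then pack the hottest ones adjacently. Names and counts are
# invented for illustration.
from collections import Counter

def hot_layout(access_log, sizes):
    """access_log: sequence of object names as observed at runtime;
    sizes: {name: size_in_bytes}.
    Returns {name: new_offset} with the hottest objects packed first."""
    counts = Counter(access_log)
    layout, offset = {}, 0
    for name, _ in counts.most_common():
        layout[name] = offset
        offset += sizes[name]
    return layout

log = ["a", "b", "a", "c", "a", "b"]
print(hot_layout(log, {"a": 8, "b": 8, "c": 8}))
# {'a': 0, 'b': 8, 'c': 16}
```

Because the access log keeps accumulating, the layout can be recomputed periodically, tracking the evolving access pattern the paragraph above describes.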
This approach supports processor design teams evaluating architectural decisions against the requirements of software which the processor will be expected to run. Dynamic instrumentation is used to efficiently capture runtime traces of applications on the current processor generation which can then be played back through the customer's simulator platform for the next generation under development.
Additionally, IDE extensions allow visualization and rapid modeling of alternative cache designs, and an in-depth analysis of common instruction sequences for which the pipeline should be optimized.
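The value of capture-once, replay-many is that a single trace can drive several candidate designs. The sketch below replays one invented address trace through a toy direct-mapped miss counter at three hypothetical set counts; a real flow would drive the customer's simulator instead.

```python
# Sketch of comparing alternative cache designs against one captured
# trace. The direct-mapped miss counter is a stand-in for a full
# simulator; configurations and the trace are invented.
def misses(addresses, line_bytes, num_sets):
    tags = [None] * num_sets
    n = 0
    for addr in addresses:
        line = addr // line_bytes
        s = line % num_sets
        if tags[s] != line:
            n += 1
            tags[s] = line
    return n

trace = [i * 4 for i in range(512)] * 2   # two passes over 2 KiB
for sets in (16, 32, 64):
    print(sets, misses(trace, 64, sets))
# 16 sets: 64 misses (working set does not fit, every line re-missed)
# 32 sets: 32 misses (all 32 lines resident after the first pass)
# 64 sets: 32 misses (no further benefit from extra capacity)
```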
Many applications, such as WebKit and JIT-compiled code, feature a fragmented execution path through their instructions due to a high number of branches. This can lead to high instruction cache miss rates which are difficult to fix by simply refactoring the code.
Dynamic profiling is used at runtime to identify blocks of code which are frequently executed and then to rewrite the instruction memory so that these blocks are adjacent to one another. This improves the instruction cache hit rate, and since the process continues in the background over the lifetime of the application, the instruction layout is continually altered to track the changing execution pattern of the application code.
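The relayout step can be sketched as an ordering problem: profile counts per code block drive a rewrite that places frequently executed blocks next to each other in instruction memory. Block names, sizes, and counts below are invented.

```python
# Sketch of profile-driven code relayout: order blocks hottest-first
# so frequently executed code shares instruction cache lines.
# Block names, sizes, and counts are invented for illustration.
def relayout(block_sizes, exec_counts):
    """block_sizes: {block: size_in_bytes};
    exec_counts: {block: times_executed}.
    Returns the block order for the rewritten instruction memory."""
    return sorted(block_sizes, key=lambda b: -exec_counts.get(b, 0))

sizes = {"init": 128, "loop_body": 64, "error_path": 256, "exit": 32}
counts = {"loop_body": 10_000, "init": 1, "exit": 1}
print(relayout(sizes, counts))
# ['loop_body', 'init', 'exit', 'error_path']
```

Rerunning this ordering in the background as the counts shift is what lets the layout track the application's changing execution pattern.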
By using memory access information captured during dynamic tracing by the Prism Technology Platform and existing IDE refactoring capabilities, it is possible to identify data structures which are causing inefficient data cache use due to poor spatial locality. The process of refactoring the code is semi-automated to make the final data layout closer to optimal for the target platform.
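One common refactoring this kind of trace data can motivate is converting an array-of-structures into a structure-of-arrays, so that a loop touching a single field reads one dense array instead of striding through whole records. The record shape and field names below are invented for illustration.

```python
# Sketch of an AoS -> SoA refactoring for spatial locality.
# Field names and values are invented for illustration.
particles_aos = [
    {"x": 1.0, "y": 2.0, "mass": 5.0},
    {"x": 3.0, "y": 4.0, "mass": 6.0},
]

def to_soa(records):
    """Group each field into its own contiguous array."""
    return {field: [r[field] for r in records] for field in records[0]}

soa = to_soa(particles_aos)
# A pass over just "mass" now touches one dense array rather than
# skipping across whole records.
print(sum(soa["mass"]))  # 11.0
```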
Translation of the contents of a binary executable file from one instruction set to another supports the migration of software between platforms when there is no access to the original source code. The optimizing translation process also identifies potential code optimizations to improve performance on the new target, including rescheduling and mapping to SIMD instructions where possible.
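At its simplest, the translation step maps each decoded source instruction to an equivalent target sequence. Both instruction sets and the opcode table below are invented toys; a real translator also rewrites addresses and reschedules the result, as noted above.

```python
# Toy sketch of instruction-set translation: each decoded source
# instruction maps to an equivalent target sequence. Both ISAs and
# the opcode table are invented for illustration.
OPCODE_MAP = {
    "old_load":  ["new_ldr"],
    "old_store": ["new_str"],
    "old_madd":  ["new_mul", "new_add"],  # one-to-many expansion
}

def translate(source_ops):
    """Expand each source opcode into its target-ISA sequence."""
    out = []
    for op in source_ops:
        out.extend(OPCODE_MAP[op])
    return out

print(translate(["old_load", "old_madd", "old_store"]))
# ['new_ldr', 'new_mul', 'new_add', 'new_str']
```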