Customers engage with CriticalBlue to improve the performance and responsiveness of their mobile and embedded computing products. Using our expertise in multicore parallelism and low-level data flow, CriticalBlue analyzes and optimizes application and underlying platform software to get the most out of all available hardware resources.

Because of the rapid pace of change in mobile and embedded computing, customers rely on CriticalBlue not just to solve their immediate product performance problems, but also to raise their development team's optimization capabilities through lasting partnerships.

The Mobile and Embedded Performance Gap

Mobile and embedded computing platforms are quickly growing in complexity, and a significant performance gap has developed between what a typical application developer can accomplish and what could be achieved by optimally exploiting the underlying platform hardware. This gap may manifest itself as a combination of poor usability, delayed introduction of new features, sluggish responsiveness, and excessive power drain.

Mobile computing platforms, often Android based, are expected to run many different applications, and application developer productivity is crucial to the platform's success. Application performance and responsiveness are often sacrificed for quick-to-market development targeting a broad variety of platform hardware.

CriticalBlue works with platform providers to bring new phones and tablets to market or to extend the lifetime of existing products, with success measured by representative benchmark scores and the usability of key applications. CriticalBlue does this not by modifying individual applications, but rather by optimizing core libraries, virtual machines, and operating system services, leveraging the unique characteristics of the underlying hardware architecture and resources.

CriticalBlue also works with leading-edge application developers on applications requiring maximum performance and responsiveness, the platform web browser being the most common example. CriticalBlue analyzes application operations on target platforms and recommends changes to the application, leveraging multicore parallelism, data and instruction refactoring, and resource utilization, both to broadly enhance application performance and to optimize performance on specific architectures.
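As a hedged illustration of the data-parallel refactoring mentioned above, a hot loop can often be partitioned so that independent slices of the input are processed on separate cores. The function names and workload here are hypothetical, not CriticalBlue's actual code; in a real mobile or embedded port the per-chunk work would typically be native code, which is what lets the slices truly run in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # stand-in for a CPU-heavy per-element transform; in a real port this
    # would be native code that releases the interpreter lock so the
    # worker threads can actually run on separate cores
    return sum(x * x for x in chunk)

def process_parallel(data, workers=4):
    # partition the input so each worker owns an independent slice,
    # avoiding shared mutable state between cores
    step = max(1, len(data) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))
```

The key design point is that each chunk is self-contained, so no locking is needed on the hot path; the only synchronization is the final reduction of per-chunk results.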

Embedded computing products, often Linux based, center around one application or group of services. Often the entire product stack, from application software through platform hardware, can be considered for optimization to meet throughput, capacity, and efficiency goals.

CriticalBlue works with the application development team, reaching deeply across layers of application, operating system, drivers, and hardware, to tune application performance while respecting product maintainability. A typical product might be a wireless or network infrastructure component needing to maximize the number of simultaneous users or packet streams per device.

CriticalBlue also often advises platform teams on optimizing next generation hardware for their key application software loads.

Collaborative Engagement Process

From experience successfully eliminating the performance gap on a variety of customer projects, CriticalBlue has developed a core customer engagement process which is repeatable, manages risk early and effectively, and gets performance results quickly.

While some customers choose to hand off projects to our development teams, other customers prefer that CriticalBlue's developers work hand in hand with their own development teams, so our process is adaptable to best fit each working relationship. In general, our core process is strongly biased towards multiple short development iterations per project and is built around many of the tenets of agile development. Each iteration delivers working software measured against performance metrics to maximize rate of progress towards key results and to facilitate frequent customer collaboration throughout the project.

The core process has three main parts:

  • Realistic Assessment: the initial project analysis required to establish shared, measurable project goals, identify and manage risk, and plan against the key timeframes for your project.
  • Delivering Results: a set of working iterations designed to meet your target performance results, anticipating course corrections as the project evolves.
  • Acquired Capabilities: productized best practices and lessons, integrated into your team's workflow through Prism tools and techniques, which raise your development team's capabilities and effectiveness.

Realistic Assessment

The first step in the core process is to understand where you are and where you need to go. One of the most important steps is to determine a set of target metrics which establish a performance baseline and which accurately gauge progress towards competitive product or application performance. Time and resource constraints are also identified which help set realistic targets.

CriticalBlue analyzes your current platforms and applications to determine the validity and completeness of the project metrics, as well as to determine the sensitivity of those metrics to different parts of the system (applications, libraries, virtual machine, hardware resources, etc.). This analysis helps determine where to focus resources and estimate the degree of effort and risk to attain each target metric.

This is a highly collaborative activity: CriticalBlue seeks to understand the project's real context and critical performance goals, to identify any interim milestone dates which will impact scheduling, and to determine what existing work CriticalBlue and you each bring to the problem.

Since the working relationship between CriticalBlue experts and customer development teams varies by engagement, this is also the time to tailor our core process to fit cleanly with your preferred development flow and support systems.

An initial results plan is developed around a set of iterations, staged to unlock metrics improvements in an orderly fashion and to handle the higher-risk parts of the development as early as possible. The assessment phase concludes with sign-off on the shared project metrics, the collaborative working process, and the results plan.

Delivering Results

Iterating frequently keeps CriticalBlue closely attuned to your needs, and the transparency of metrics reporting engenders close communication and continuous progress towards achieving the key results. CriticalBlue likes to establish a regular, usually weekly, rhythm of working software builds and metrics assessments, and in coordination with this, insists on at least weekly results reviews with key stakeholders.

Each iteration is designed to move closer to the final solution. This encourages us to be ruthlessly practical in how we stage optimizations. CriticalBlue adapts its own development style to interface cleanly with your existing development and validation tools and flows. Based on the results metrics, CriticalBlue develops test suites which measure performance and stress the optimizations and resource limits. Your use case scenarios and relevant functional and integration tests are also very important to us, if we have access to them.
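One minimal shape for such a metrics-driven test suite is a micro-benchmark that records wall-clock samples for a workload and checks them against an agreed time budget. This is a sketch under assumptions, not the actual Prism tooling; the function names and the budget contract are illustrative:

```python
import statistics
import time

def measure(workload, runs=5):
    # run the workload several times and collect wall-clock samples,
    # so a single noisy run does not decide pass/fail
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    # report the metrics an iteration would be judged against
    return {"median_s": statistics.median(samples),
            "worst_s": max(samples)}

def within_budget(metrics, budget_s):
    # a target metric: the median run must fit the agreed time budget
    return metrics["median_s"] <= budget_s
```

A production suite would also control CPU frequency and thermal state and stress corner-case inputs, but the essential contract is the same: every build is measured against the shared project metrics.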

Close cooperation on achieving results and on delivering interim working software encourages customers to test-drive the software internally between iterations. We find that customers who take the time to do this will often reassess and modify performance goals and platform assumptions during the project. Rather than push back, we have come to expect it. Working together with short iterations gives us the ability to course correct incrementally, rather than locking in rigid goals and large, painful changes. When necessary, we refactor iteration sequences as priorities change, or we add additional iterations to capture existing gains while insulating high-risk, high-reward stretch goals from the rest of the project.

In later iterations of your project, as we close in on all project targets, our attention shifts increasingly to stress testing to ensure that performance metrics are met for extreme and corner cases. This is also a time for your development and quality assurance teams to increase their own testing and inspection of code deliveries and to call out any integration issues and areas for improvement. After the last iteration, we enter a support phase where we will make changes to enhance the quality of the code as it relates to maintaining reliable performance targets within the scope of the project.

Acquired Capabilities

One of CriticalBlue's key assets is our expert knowledge and experience, which we embed in our Prism tools and techniques, and which give us deep insight into platform operation across multiple layers of software and hardware. During results iterations, we often enhance Prism capabilities to better analyze, visualize, and/or optimize your particular performance issues.

We love to teach, and our work doesn't stop once performance metrics are achieved. With most customer projects, we will perform our own internal iterations to ensure productization of newly added Prism features, and we train your development teams in how to use the Prism technology to enhance your existing development flow.

We think this may be some of the best training your development teams receive, because the examples are built on your own software base. Your teams can apply the same multicore optimization techniques CriticalBlue used with you on the current project to future projects, and the deep insight into performance gaps, and how to bridge them, becomes part of your development team's permanently acquired capabilities.

Customers like you often begin a relationship with CriticalBlue on a single hot-button project, taking advantage of one aspect of our technology and expertise for an initial solution. As these new capabilities become established in the development team, it is fairly common for us to be retained for additional projects leveraging our assets in other areas. These longer-term partnerships solve a succession of your critical software performance problems while continuously enhancing your development teams' optimization capabilities.