Finding ways to improve productivity in unlikely places
Today, many industries face time-to-market challenges, and semiconductor design is no different. In the industry more formally known as Electronic Design Automation (EDA), design houses are under tremendous pressure to keep up with the continuous growth in complexity of the chips that power modern electronic devices. That complexity, fueled by consumer demand for more features and performance, has made design simulation and verification even more critical to successful first-pass chip tape-outs, placing huge pressure on engineering and IT to keep chips on schedule and within budget.
"New file system technologies are optimized for flash and can accelerate product design workflows"
The challenge is that more complex simulations must be completed in less time, and legacy external storage is often the reason for lengthy chip design cycles: the applications are starved of data. The conventional IT response is to add more compute resources and EDA tools, but purchasing expensive equipment and tools may not shorten design verification because it does not address the primary problem, storage bottlenecks. These bottlenecks occur at the network-attached storage (NAS) filer, leaving applications starved of data and design teams unproductive. Legacy NAS systems were not designed for the diverse workloads found in EDA today: complex directory structures at massive scale, metadata-heavy I/O (input/output), large and small files, and both random and sequential access patterns. Front-end and back-end chip design processes have unique storage requirements, so combining I/O-intensive and bandwidth-intensive workloads on the same storage system often results in huge bottlenecks that delay final tape-out.
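To see why metadata-heavy, small-file work behaves so differently from streaming I/O, consider this minimal Python sketch. It is illustrative only: the file names (cell_*.v, layout.gds) are hypothetical stand-ins for front-end and back-end design artifacts, and it writes the same total number of bytes both ways. The small-file path pays a per-file metadata cost (create, open, close) that a filer must absorb thousands of times over.

```python
import os
import tempfile
import time

def small_file_workload(root: str, count: int = 2000, size: int = 4096) -> float:
    """Write many small files; each one incurs metadata operations
    (create, open, close) on top of the tiny data write."""
    start = time.perf_counter()
    for i in range(count):
        # cell_*.v is a hypothetical name for a small front-end design file
        with open(os.path.join(root, f"cell_{i}.v"), "wb") as f:
            f.write(os.urandom(size))
    return time.perf_counter() - start

def large_file_workload(root: str, total: int = 2000 * 4096) -> float:
    """Write the same total number of bytes as one sequential stream,
    the pattern typical of large back-end files."""
    start = time.perf_counter()
    with open(os.path.join(root, "layout.gds"), "wb") as f:
        f.write(os.urandom(total))
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as root:
    print(f"2000 small files: {small_file_workload(root):.3f}s")
    print(f"one large file:  {large_file_workload(root):.3f}s")
```

On most systems the small-file path is dramatically slower despite moving identical data, which is the gap that flash-optimized file systems target.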
Historically, scale-out NAS has been an attractive solution that kept pace with increasing performance and capacity demands across industries. However, it comes with compromises such as high management overhead, forklift upgrades, and islands of storage. In EDA, each new chip design increases the storage capacity and performance required. The number of simulations performed and the amount of data being produced today demand a radical departure from traditional storage architectures in order to maintain productivity.
Many workflows, but especially complex chip designs, can benefit greatly from the performance of flash technology. Flash is ideal for front-end design, which requires the ability to rapidly process small files. Although scale-out NAS is well suited to streaming the large files common in back-end design, it cannot deliver small-file performance at the scale today's designs require. EDA increasingly requires storage optimized for the entire design flow.
Software-defined storage (SDS) is being widely adopted to provide both small- and large-file performance at low latency without the cost, complexity, and performance limitations of legacy external storage systems. According to several analysts, the global SDS market could be as large as $40B, with file-based SDS accounting for approximately $7B of that. Most applications still require a file system to organize and store their data, and inefficient disk operations cost precious time, leading to idle workers and lost productivity.
A critical component of a high-performance SDS solution is the underlying file system. The highest-performing SDS solutions are based on a parallel, distributed file system, one that dynamically and independently scales both performance and capacity and has been designed for flash technology. Designing for flash means data is stored in the same format the flash device uses, greatly improving storage efficiency, performance, and ultimately worker productivity. Flash memory is the key to achieving the low-latency, small-file performance that trading systems, databases, and EDA simulation tools rely on.
A key advantage of SDS solutions is their flexibility to run either alongside your applications on shared infrastructure (known as hyperconverged) or separately on dedicated hardware. In contrast, traditional NAS uses rigid configurations that run on specialized hardware and don't scale, wasting IT resources. This does not mean NAS systems are not useful; quite the contrary, legacy NAS devices can be repurposed as a more economical tier of storage for applications that do not require the extreme performance of flash. Inactive data can be moved from the performance (flash) tier to slower, more economical NAS for long-term storage.
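Commercial SDS products typically handle this demotion transparently, but the policy itself is simple. Below is a minimal Python sketch, assuming hypothetical mount points (/mnt/flash, /mnt/nas) and an access-time threshold: files untouched for a set number of days are moved from the flash tier to the repurposed NAS tier, preserving directory layout.

```python
import os
import shutil
import time

FLASH_TIER = "/mnt/flash"  # hypothetical performance-tier mount point
NAS_TIER = "/mnt/nas"      # hypothetical capacity-tier (repurposed NAS) mount
AGE_DAYS = 90              # demote files untouched for this many days

def demote_inactive(flash_root: str, nas_root: str, age_days: int) -> None:
    """Move files whose last access time is older than the threshold
    from the flash tier to the NAS tier, keeping the directory layout."""
    cutoff = time.time() - age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(flash_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            # Note: atime is unreliable on volumes mounted with noatime;
            # a real policy engine would track access in its own metadata.
            if os.path.getatime(src) < cutoff:
                dst = os.path.join(nas_root, os.path.relpath(src, flash_root))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)

demote_inactive(FLASH_TIER, NAS_TIER, AGE_DAYS)
```

The design point is that the policy, not the hardware, decides where data lives, which is exactly the flexibility rigid NAS configurations lack.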
Productivity stems from operational efficiency. In EDA, storage solutions that deliver on-demand performance and capacity during peak simulation periods can have a tremendous impact on an organization's ability to achieve on-time chip delivery. By avoiding rigid, hardware-based storage architectures, designers can achieve breakthrough storage system performance at low latency and much-reduced cost. When verification and simulation comprise 60 percent of the chip design cycle, it makes sense to target these areas to improve operational efficiency. EDA organizations must ask themselves: if we could reduce this time by 30 to 50 percent, how much more productive could we be, and what could that mean for our bottom line?
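The arithmetic behind that question is straightforward. Taking the figures from the text, verification and simulation make up 60 percent of the cycle, so a 30 to 50 percent speedup of that portion shortens the overall design cycle by 18 to 30 percent:

```python
VERIFICATION_SHARE = 0.60           # portion of the design cycle, from the text

for reduction in (0.30, 0.50):      # the 30% and 50% speedups posed above
    overall = VERIFICATION_SHARE * reduction
    print(f"{reduction:.0%} faster verification -> "
          f"{overall:.0%} shorter overall design cycle")
# 30% faster verification -> 18% shorter overall design cycle
# 50% faster verification -> 30% shorter overall design cycle
```

For a multi-month tape-out schedule, shaving a fifth to a third off the calendar is the difference between hitting a market window and missing it.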