HC24 (2012)

August 27-29, 2012 at Flint Center, Cupertino, CA

Combined PDFs

Tutorials, August 27, 2012

Tutorial 1: (The Evolution of) Mobile SoC Programming, Neil Trevett (Host), Khronos

Mobile devices are driving a revolution in how computing is made accessible to all. Users can now carry devices in their pockets that grant unfettered access to and facile manipulation of their data, redefining how people work and relate to each other. The blunt instrument of Moore’s law performance scaling has certainly contributed to this trend. However, the demands of graceful industrial design (thermals, PCB area), friction-free user interfaces (smooth performance), and long-term wire-free usage (battery life and unconstrained wireless connectivity) have forced architects of “System-on-Chip” (SoC) designs to take more nuanced approaches to utilizing increasing transistor density. This includes the integration of many purpose-optimized logic blocks, programmable and non-programmable, which, when properly orchestrated with other (traditional) SoC components like CPUs and GPUs, deliver low-power implementations of common use-cases for these devices.

The challenge for Operating System vendors and developers is how to enable this orchestration efficiently for the burgeoning market of mobile and tablet-optimized applications (which is nearing millions of applications across all platforms). To achieve optimal power at the best quality, it is not unusual for the app developer to either implicitly or explicitly program upwards of a dozen pieces of chip IP, each with its own programming model and performance/power characteristics. How are the OS vendors enabling this vibrant and growing applications ecosystem in the presence of such daunting programming problems?

This tutorial will review the current landscape of mobile SoCs and their future direction (a hint: we’re integrating more functionality than ever on a single die). We will describe how the complexity of developing for these SoCs is tamed by today’s APIs and what challenges remain. We will also discuss how future use-cases will demand additional capabilities be enabled, impacting everything from chip design to OS to high-level APIs.

Tutorial 2: Die Stacking

2.5D/3D die stacking increases aggregate inter-chip bandwidth and shrinks board footprint while reducing I/O latency and energy consumption. By integrating in one package multiple tightly-coupled semiconductor dice – each possibly in a process optimized for the power, performance, and cost of a particular function – this technology gives system designers additional options to partition and scale solutions efficiently. Die stacking has already transformed the design of high-end CMOS image sensors, and it promises to also enhance FPGA, graphics, and mobile applications. In Part 1 of this tutorial we will examine the key enabling technologies, such as silicon interposers, through-silicon vias (TSVs), micro-bumps, and assembly integration. In Part 2 we will cover the design considerations and trade-offs of 2.5D/3D in CAD, ESD, and architecture. Part 3 will showcase how the technology is used in systems and applications for memory integration, optics integration, and monolithic die partitioning.

Conference Day 1, August 28, 2012

Session 1: Microprocessors

Session 2: Fabrics and Interconnects

Keynote 1: The Surround Computing Era

Session 3: Many Core and GPU

Session 4: Multimedia and Imaging

Session 5: Integration

Keynote 2: The Future of Wireless Networking

Conference Day 2, August 29, 2012

Session 6: Technology and Scalability

Session 7: SoC

Keynote 3: Cloud Transforms IT, Big Data Transforms Business

Session 8: Data Center Chips

Session 9: Big Iron