Description

In this presentation, we will cover the use of HPC technologies for Big Data problems. We will share the design philosophy behind the architecture of Cray analytic systems and discuss benchmark results from three kinds of workloads: graph analytics, matrix factorization, and deep learning training. These results demonstrate how the combination of the network interconnect and HPC best practices enables processing of graph datasets 1000x larger, up to 100x faster, than competing tools on commodity hardware; provides a 2-26x speed-up on matrix factorization workloads compared to cloud-friendly Apache Spark; and promises over 90% scaling efficiency on deep learning workloads, potentially reducing training times from days to hours. We will end by presenting success stories from organizations that have leveraged HPC thinking in both the enterprise and scientific computing sectors.