Modern Data Engineering with Apache Spark

By Scott Haines

Release : 2022-03-22

Genre : Programming, Books, Computers & Internet, Databases, Science & Nature, Mathematics

Kind : ebook

Leverage Apache Spark within a modern data engineering ecosystem. This hands-on guide will teach you how to write fully functional applications, follow industry best practices, and learn the rationale behind these decisions. With Apache Spark as the foundation, you will follow a step-by-step journey beginning with the basics of data ingestion, processing, and transformation, and ending up with an entire local data platform running Apache Spark, Apache Zeppelin, Apache Kafka, Redis, MySQL, MinIO (S3-compatible storage), and Apache Airflow.
Apache Spark applications solve a wide range of data problems, from traditional data loading and processing to rich SQL-based analysis, complex machine learning workloads, and even near real-time processing of streaming data. Spark fits well as a central foundation for any data engineering workload. This book will teach you to write interactive Spark applications using Apache Zeppelin notebooks, write and compile reusable applications and modules, and fully test both batch and streaming applications. You will also learn to containerize your applications using Docker, and to run and deploy your Spark applications using a variety of tools such as Apache Airflow, Docker, and Kubernetes.
Reading this book will empower you to take advantage of Apache Spark to optimize your data pipelines and teach you to craft modular and testable Spark applications. You will create and deploy mission-critical streaming Spark applications in a low-stress environment that paves the way for your own path to production.

What You Will Learn

- Simplify data transformation with Spark Pipelines and Spark SQL
- Bridge data engineering with machine learning
- Architect modular data pipeline applications
- Build reusable application components and libraries
- Containerize your Spark applications for consistency and reliability
- Use Docker and Kubernetes to deploy your Spark applications
- Speed up application experimentation using Apache Zeppelin and Docker
- Understand serializable structured data and data contracts
- Harness effective strategies for optimizing data in your data lakes
- Build end-to-end Spark Structured Streaming applications using Redis and Apache Kafka
- Embrace testing for your batch and streaming applications
- Deploy and monitor your Spark applications
