Integrating Apache Spark with MongoDB | Read/Write MongoDB data using PySpark
In this lecture, we're going to learn how to work with MongoDB using Apache Spark (PySpark). We will read data from and write data to MongoDB using PySpark code, and we will use the HDP Sandbox for this purpose, since it lets you set up a single-node Hadoop cluster on your PC.
Below are the commands, data files, and code for this lecture:
1. Get the data file:
wget https://raw.githubusercontent.com/ashaypatil11/hadoop/main/movies.user
2. Use Linux commands to copy the file to HDFS:
hadoop fs -mkdir /user/maria_dev/mongodb
hadoop fs -copyFromLocal movies.user /user/maria_dev/mongodb/movies.user
3. Get the required PySpark scripts from GitHub:
wget https://raw.githubusercontent.com/ashaypatil11/hadoop/main/mongo-read.py
wget https://raw.githubusercontent.com/ashaypatil11/hadoop/main/mongo-write.py
4. Submit the PySpark applications using the commands below (note the double dash in --packages; a single en-dash will cause spark-submit to fail):
export SPARK_MAJOR_VERSION=2
spark-submit --packages org.mongodb.spark:mongo-spark-connector_2.11:2.3.2 mongo-read.py
spark-submit --packages org.mongodb.spark:mongo-spark-connector_2.11:2.3.2 mongo-write.py
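The actual code lives in the mongo-read.py and mongo-write.py scripts downloaded in step 3. As a rough sketch of the pattern such scripts typically follow with the MongoDB Spark Connector 2.x, the snippet below writes the HDFS file into a MongoDB collection and reads it back. The MongoDB URI, database/collection names, and column names here are illustrative assumptions (the columns assume the MovieLens u.user layout), not taken from the scripts themselves:

```python
# Sketch only: assumes a local MongoDB instance and the MovieLens-style
# pipe-delimited layout user_id|age|gender|occupation|zip for movies.user.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("MongoDBIntegration")
         # Hypothetical URIs; adjust host, database, and collection as needed.
         .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/movielens.users")
         .config("spark.mongodb.output.uri", "mongodb://127.0.0.1/movielens.users")
         .getOrCreate())

# Write path: read the pipe-delimited file from HDFS and append it to MongoDB.
users = (spark.read
         .option("sep", "|")
         .csv("hdfs:///user/maria_dev/mongodb/movies.user")
         .toDF("user_id", "age", "gender", "occupation", "zip"))
users.write.format("com.mongodb.spark.sql.DefaultSource").mode("append").save()

# Read path: load the collection back as a DataFrame and run a quick aggregation.
readback = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
readback.groupBy("occupation").count().orderBy(F.desc("count")).show()

spark.stop()
```

Because the connector JAR is supplied at submit time via --packages, this code has no connector import of its own; the format string "com.mongodb.spark.sql.DefaultSource" is how the 2.x connector is selected.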
————————————————————————————————————-
HDP Sandbox Installation links:
Oracle VM Virtualbox: https://download.virtualbox.org/virtualbox/6.1.32/VirtualBox-6.1.32-149290-Win.exe
HDP Sandbox link: https://archive.cloudera.com/hwx-sandbox/hdp/hdp-2.6.5/HDP_2.6.5_virtualbox_180626.ova
HDP Sandbox installation guide: https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/1/
Full step-by-step video: https://youtu.be/YXZLGL9vSug
Also check out similar informative videos in the field of cloud computing:
What is Big Data: https://youtu.be/-BoykjY5nKg
How Cloud Computing changed the world: https://youtu.be/lf2lQAyW2b4
What is Cloud? https://youtu.be/DeCMeA9Xm2g
Top 10 facts about Cloud Computing that will blow your mind! https://youtu.be/hmxNJEQ4XVY
Audience
This tutorial has been prepared for professionals and students aspiring to gain deep knowledge of Big Data analytics using Apache Spark and to pursue Spark Developer and Data Engineer roles. It is also useful for analytics professionals and ETL developers.
Prerequisites
Before proceeding with this full course, it is good to have prior exposure to Python programming, database concepts, and any flavor of the Linux operating system.
———————————————————————————————————————–
Check out our full course topic wise playlist on some of the most popular technologies:
SQL Full Course Playlist-
https://youtube.com/playlist?list=PL6UwySlcwEYISVLQlYi3W6rGCIo9sJM0J
PYTHON Full Course Playlist-
https://youtube.com/playlist?list=PL6UwySlcwEYJgM4eUQOvR1KAWryFYcclq
Data Warehouse Playlist-
https://youtube.com/playlist?list=PL6UwySlcwEYKxi-fQHLkVYDZrJcBawZA9
Unix Shell Scripting Full Course Playlist-
https://youtube.com/playlist?list=PL6UwySlcwEYIZGsbXnUxsojD0yeUA67lb
———————————————————————————————————————–
Don’t forget to like and follow us on our social media accounts:
Facebook-
https://www.facebook.com/ampcode
Instagram-
https://www.instagram.com/ampcode_tutorials/
Twitter-
https://twitter.com/ampcodetutorial
Tumblr-
ampcode.tumblr.com
———————————————————————————————————————–
Channel Description-
AmpCode provides an e-learning platform with a mission of making education accessible to every student. AmpCode offers tutorials and full courses on some of the best technologies in the world today. By subscribing to this channel, you will never miss out on high-quality videos on trending topics in the areas of Big Data & Hadoop, DevOps, Machine Learning, Artificial Intelligence, Angular, Data Science, Apache Spark, Python, Selenium, Tableau, AWS, Digital Marketing, and many more.
#pyspark #bigdata #datascience #dataanalytics #datascientist #spark #dataengineering #apachespark