FAQ: RDDs with PySpark - Motivate our Lazy Friends with Actions

This community-built FAQ covers the “Motivate our Lazy Friends with Actions” exercise from the lesson “RDDs with PySpark”.

Paths and Courses
This exercise can be found in the following Codecademy content:

[Beta] Introduction to Big Data with PySpark

FAQs on the exercise Motivate our Lazy Friends with Actions

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!
You can also find further discussion and get answers to your questions over in #get-help.

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head to #get-help and #community:tips-and-resources. If you want feedback or inspiration for a project, check out #project.

Looking for motivation to keep learning? Join our wider discussions in #community.

Learn more about how to use this guide.

Found a bug? Report it online, or post in #community:Codecademy-Bug-Reporting

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

I guess the answer for the first question under ‘Actions’ should be:

Running the cell gives the desired output. However, it is not accepted by the checker, which reports: “Your code did not produce an output. Make sure your code has been saved in the designated code cell and that it produces an output when run.”

I’ve got the same problem; I tried everything and nothing worked.

OK, so I figured it out: every time you execute any cell, you have to save the notebook by clicking ‘Save and Checkpoint’ in the upper left.

How can you apply logic that has to be sequential when spark evaluates the code at a later point in time?

The lesson states:

rdd = spark.sparkContext.parallelize([1, 2, 3, 4, 5])
rdd.map(lambda x: x + 1).filter(lambda x: x > 3)

Instead of following the order that we called the transformations, Spark might load the values greater than 3 into memory first and perform the map function last. This swap will save memory and time because Spark loaded fewer data points and mapped the lambda to fewer elements.
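As a rough illustration of lazy evaluation (plain Python, not Spark itself): Python’s `map()` and `filter()` also return lazy iterators, and nothing is computed until something consumes the pipeline, much like a Spark action triggering execution of recorded transformations.

```python
# Plain-Python analogy for lazy transformations (illustration only, not Spark).
data = [1, 2, 3, 4, 5]

# map() and filter() return lazy iterators: no work has happened yet,
# just as Spark only records transformations without running them.
pipeline = filter(lambda x: x > 3, map(lambda x: x + 1, data))

# Consuming the iterator (like calling a Spark action such as collect())
# is what actually triggers the computation.
result = list(pipeline)
print(result)  # [4, 5, 6]
```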

What if my logic requires that x + 1 takes place first, so that a value is not filtered out in the next step? Is there any way to ensure that lazy execution takes this kind of sequence into account?
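For what it’s worth, swapping the map and filter literally does change the result for this chain, so an engine that reorders operations would have to adjust the predicate to preserve the semantics of the order you wrote. A quick plain-Python check:

```python
data = [1, 2, 3, 4, 5]

# map first, then filter — the order written in the lesson's chain
map_then_filter = [x for x in (v + 1 for v in data) if x > 3]

# filter first, then map — the naive "swap" applied literally
filter_then_map = [v + 1 for v in data if v > 3]

print(map_then_filter)  # [4, 5, 6] — the element 3 survives because 3 + 1 > 3
print(filter_then_map)  # [5, 6]    — the element 3 was dropped before mapping
```

So the orderings are not interchangeable as written; any optimization that runs the filter first must rewrite the predicate (here, effectively filtering on x + 1 > 3) for the results to match.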

Wrong definition in RDDS WITH PYSPARK 4/8

it says:
The key thing about actions is that, like transformations, they take an RDD as input, but they will always output a value instead of a new RDD.

However, the syntax for this is stated as:
rdd.reduce(lambda x, y: x + y)
Therefore it is a method on the object, not a function that takes the object (RDD) as an input.
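The behaviour being described (an action consuming an RDD and returning a plain value rather than a new RDD) can be mimicked with Python’s own functools.reduce, using the same lambda:

```python
from functools import reduce

data = [1, 2, 3, 4, 5]

# reduce repeatedly combines pairs of elements with the lambda, collapsing
# the whole sequence to a single value — the same idea as
# rdd.reduce(lambda x, y: x + y) returning a number instead of a new RDD.
total = reduce(lambda x, y: x + y, data)
print(total)  # 15
```

Whether you call that a method on the RDD or a function over it is mostly phrasing; either way, the defining trait of an action is that the output is a concrete value, not another lazy RDD.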