Censor Dispenser Part Three Help

I’m having a hard time cleaning up the output for part three of the Censor Dispenser project (proprietary_terms). This is the code I’m using:

def censor_proprietary_terms(email):
    for term in proprietary_terms:
        email = email.replace(term, 'X')
    return email

If I print censor_proprietary_terms(email_two) I get the following output:

Good Morning, Board of Investors,

Lots of updates this week. The Xs have been working better than we could have ever expected. Our initial internal data dumps have been completed and we have proceeded with the plan to connect the system to the internet and wow! The results are mind blowing.

She is learning faster than ever. Her learning rate now that X has access to the world wide web has increased exponentially, far faster than we had though the Xs were capable of.

Not only that, but we have configured X X to allow for communication between the system and our team of researcXs. That's how we know X considers Xself to be a X! We asked!

How cool is that? We didn't expect a personality to develop this early on in the process but it seems like a rudimentary X is starting to form. This is a major step in the process, as having a X and X will allow X to see the problems the world is facing and make hard but necessary decisions for the betterment of the planet.

We are a-buzz down in the lab with excitement over these developments and we hope that the investors share our enthusiasm.

Till next month,
Francine, Head Scientist

I would really appreciate some help with understanding what’s going on here and how I can return the expected output for this part of the project.

What does clean up mean?

What I mean is to return the expected output which should have censored ‘She’ and ‘Her’, should not have censored ‘researcXs’, and should have completely censored ‘Xself’.

Some words should be censored if the bad word appears inside them, but not others? That seems inconsistent.

You may want to start paying attention to word boundaries and consider words on their own instead of running replace on the whole thing.

Got it, I’ll attempt to implement your suggestion. Thank you.

Please could you provide a code snippet which will point me in the right direction?

Don’t know what direction that is.

If you want words you’d be looking to split the text wouldn’t you? Specifically you’d want to split at boundaries of alphabetical and non-alphabetical characters, so you could use str.isalpha to detect that. (Yes, that means making a loop, stepping through the input, testing each character and deciding whether or not to make a cut there)
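The boundary-splitting loop described above could look something like this (split_tokens is a made-up helper name, not the official solution):

```python
def split_tokens(text):
    """Split text into alternating runs of alphabetical and
    non-alphabetical characters, keeping every character."""
    tokens = []
    current = ''
    for ch in text:
        # Cut whenever the character type flips between letter and non-letter.
        if current and ch.isalpha() != current[0].isalpha():
            tokens.append(current)
            current = ''
        current += ch
    if current:
        tokens.append(current)
    return tokens

print(split_tokens("researchers know her."))
```

Because no characters are thrown away, `''.join(tokens)` reproduces the original text exactly, which makes reassembly after censoring trivial.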

def censor_proprietary_terms(email):
    new_email = email
    for term in proprietary_terms:
        new_email = new_email.replace(term, 'X')
    return new_email

Why is making a copy of the email the key to the solution?

If I apply replace on email directly, it won’t replace all the terms of proprietary_terms in email.

Still struggling to get the logic here.

Thanks for the feedback!

Hey there!

I’m having some difficulties with this part as well.

I’m looking to build a function that grows with the problem, since all the parts share the same root, i.e. take two inputs, break them apart into easily comparable entities, check and redact, then reassemble.

Right now I’m having an issue with handling newlines and double newlines ('\n\n'). How should I go about iterating over and splitting these?

Here’s my rough code.

new_email = []
for i in range(1):
    new_email.append(email_two.strip('\n').split(' '))
    temp_email = []
    for strings in new_email:
        if '\n\n' in strings:
            temp_email.append(strings.split('\n\n'))
            for word in temp_email:
                new_email.append(word)

Thank you! Any clues would be appreciated.

Hey Thomas,

I’m also struggling with this step, so my reactions are purely questions, not answers.
In line 3, why do you add the .strip('\n')? The \n isn’t at the edges of the email.

And maybe you’d like to first split on '\n' and after that on ' '? But how will you be able to reassemble the email neatly with the right line breaks?
Hope this helps you.
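The split-on-'\n'-then-on-' ' idea could be sketched like this (censor_terms and the sample text are illustrative, not the project solution; note that punctuation stuck to a word, e.g. "fast.", would still defeat the exact comparison):

```python
def censor_terms(email, terms):
    censored_lines = []
    for line in email.split('\n'):       # keep track of where lines were
        words = line.split(' ')          # then split each line on spaces
        words = ['X' if w in terms else w for w in words]
        censored_lines.append(' '.join(words))
    return '\n'.join(censored_lines)     # reassemble with the right newlines

print(censor_terms("She learns fast.\n\nHer rate grew.", ["She", "Her"]))
```

Splitting line by line first means the '\n' characters never end up inside the word tokens, so joining back with '\n' restores the paragraph breaks exactly.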

Yeah, I took another look at it and figured it out. It isn’t necessary to remove the newlines for this part of the exercise (via split).

Actually the answer is very simple: it’s a for loop running through the list of NoNo words that simply replaces each one with asterisks matching the word’s length.

Now, removing the words before and after the redacted word is what I’m currently working on. This seems to be where splitting the email up is necessary, then reassembling it via "".join() I think, though I’m probably wrong.
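The simple replace loop described above might look like this (redact is a made-up name); the usage line shows the substring problem that the rest of the thread is wrestling with:

```python
def redact(email, nono_words):
    # Replace every occurrence of each banned word with
    # asterisks of equal length.
    for word in nono_words:
        email = email.replace(word, '*' * len(word))
    return email

# str.replace hits substrings too, so 'researcher' is damaged:
print(redact("her herself researcher", ["her"]))
```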

hey!

I took a long time and a lot of banging my head against the wall to split the text via the .isalpha() method, because when I tried it the simple way, ‘Her’ and ‘herself’ weren’t found, and ‘her’ in ‘researcher’ was found and replaced. Were you able to account for this?

greetings!

I haven’t heard of the .isalpha() method!

I did experience that issue with part 3; I figured it out using a very simple for loop. It ran into the same issue as yours does, i.e. danger vs. dangerous, though if you solve the last part of the exercise it shouldn’t matter, since everything surrounding it will be redacted.

This is where I stand

def Blackout(word):
    blackout = len(word) * '*'
    return blackout

def Scrub_superior(email, illegal_verbage):
    for word in illegal_verbage:
        if word in email:
            email = email.replace(word, Blackout(word))
    return email

The .isalpha() method will return True if the input is a letter of the alphabet. I used it to divide the characters into a list consisting of words and not-words, so to speak. It worked really well in the end, but then I found out that Codecademy’s solution was way simpler (but also way less effective).
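Putting the isalpha split and whole-word comparison together might look like this. This is a sketch, not the poster’s actual code: censor_proprietary is a made-up name, and it assumes the term list contains ‘herself’ as its own entry (multi-word terms would need extra handling):

```python
def censor_proprietary(email, terms):
    lowered = [t.lower() for t in terms]
    out = []
    token = ''
    for ch in email + ' ':               # trailing sentinel flushes the last token
        if ch.isalpha():
            token += ch
        else:
            # Only whole words are compared, so 'researchers' survives
            # while standalone 'She' or 'Her' are redacted.
            if token.lower() in lowered:
                token = '*' * len(token)
            out.append(token)
            token = ''
            out.append(ch)
    return ''.join(out)[:-1]             # drop the sentinel space

print(censor_proprietary("She said herself that researchers agree.",
                         ["she", "her", "herself"]))
```

Comparing lowercased whole tokens is what catches ‘She’ and ‘Her’ at sentence starts without touching ‘researchers’, which is exactly the mismatch the expected output demands.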

That’s a great solution, I’ll try to fudge about with it!

Hi,
Can anyone take a look at my solution, please?
Is this the right output? Or am I misunderstanding something?

Output
Good Morning, Board of Investors,

Lots of updates this week. The <censored>s have been working better than we could have ever expected. Our initial internal data dumps have been completed and we have proceeded with the plan to connect the system to the internet and wow! The results are mind blowing.

She is learning faster than ever. Her learning rate now that <censored> has access to the world wide web has increased exponentially, far faster than we had though the <censored>s were capable of. 

Not only that, but we have configured <censored> <censored> to allow for communication between the system and our team of researc<censored>s. That's how we know <censored> considers <censored>self to be a <censored>! We asked!

How cool is that? We didn't expect a personality to develop this early on in the process but it seems like a rudimentary <censored> is starting to form. This is a major step in the process, as having a <censored> and <censored> will allow <censored> to see the problems the world is facing and make hard but necessary decisions for the betterment of the planet.

We are a-buzz down in the lab with excitement over these developments and we hope that the investors share our enthusiasm.

Till next month,
Francine, Head Scientist