Code Question on Python Project Thread Shed

Hello! I am new to coding and am currently doing the Computer Science career path. I am on the project “Thread Shed” and my output is not showing up the way I want it to. I have watched the walkthrough for the project, and even though it seems like I am doing the exact same thing, my output always comes out different. My current code through step 7 is:

daily_sales_replaced = daily_sales.replace(';,;', '.')
daily_transactions = daily_sales_replaced.split(',')
daily_transactions_split = []

for sale in daily_transactions:
  daily_transactions_split.append(sale.split('.'))
transactions_clean = []

for transaction in daily_transactions_split:
  transaction_clean = []
  for data_point in transaction:
    transactions_clean.append(data_point.strip(" "))

It is supposed to come out as, e.g.: ['Myrtle Morris \n', '$22.66', 'green&white&blue', '09/15/17']
but it is coming out as, e.g.: 'Myrtle Morris \n', '$22', '66', 'green&white&blue', '09/15/17',
I don’t really have much idea what is going on here, so any help solving this problem would be appreciated! Thanks!

Hello @willwithawill, welcome to the forums! Could you post a link to this exercise? The problem may be that you're splitting the string at every ',' but replacing the ';,;' with a full stop ('.'). Should this be a comma?
I hope this helps!
P.S. I’ve moved this thread into Get Help >> Python.

Edit: This does not apply to the error in any way, but I just thought I'd let you know, if you didn't already, that the .split() method splits on any whitespace by default, so if you wanted to split a string on whitespace, you could just call object.split() with no argument. The way you do it is perfectly valid, I just thought I'd say it.
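For example (a quick illustration of the default behavior, with a made-up string):

```python
text = "Edith Mcbride   \n$1.21"

# .split() with no argument splits on any run of whitespace
# (spaces, tabs, newlines) and never produces empty strings.
print(text.split())     # ['Edith', 'Mcbride', '$1.21']

# .split(" ") splits at every single space, so consecutive
# spaces produce empty strings and newlines are kept.
print(text.split(" "))  # ['Edith', 'Mcbride', '', '', '\n$1.21']
```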

Yes, sorry! The project I am referencing here is this one -
In the project, you first change the item separating the different data points in a transaction, e.g.: total cost, customer name, date of sale.
Then I am splitting the string at every comma, because that is what separates two different transactions. I did try changing the '.' to a '+' instead, but to no avail. It seems like the problem is that I am creating new lists inside the original list, but not adding the transactions into those lists. I have yet to find a solution, so any ideas would be much appreciated. Thanks for the suggestions so far!


Little update! So I have looked over my code again and noticed I miswrote the name of the variable I was appending data to at the bottom of the code. I rewrote it and changed

transactions_clean.append(data_point.strip(" "))

to

transaction_clean.append(data_point.strip(" "))

Now it is removing the whitespace from all the items in the list and adding them back to the main list. Unfortunately, now it is repeating each transaction 4 times, so I am stuck with 4 copies of every transaction.

Hello! From the old code provided, I managed to create a very ungainly solution, where I loop through the transactions_clean list and append anything that isn't an empty list ([]) to a new list. This is not an ideal solution at all, but if you wanted to try it, it might help you see something in your old code that was wrong (other than a mistyped variable). What do you mean it repeats 4 times? When I run the code, I get this:

[['Myrtle Morris \n', '$22', '66', 'green&white&blue', '09/15/17']]

Again, another imperfect solution would be to get the first element of the list transactions_clean (transactions_clean[0]). This removes the nested list.
Sorry if I’ve misunderstood something.
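In case it's useful, here is a sketch of that filter-out-the-empties workaround (the sample transactions_clean contents are assumed for illustration):

```python
# Assume transactions_clean ended up with empty lists alongside
# the real cleaned transaction, e.g.:
transactions_clean = [[], [],
                      ['Myrtle Morris \n', '$22', '66',
                       'green&white&blue', '09/15/17']]

# Keep only the entries that aren't empty lists.
filtered = [item for item in transactions_clean if item != []]
print(filtered)
# [['Myrtle Morris \n', '$22', '66', 'green&white&blue', '09/15/17']]
```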

Sorry for not writing that very clearly. The for loop (I assume) is adding the list of data points in the transaction to the main list of transactions four times, e.g.: ['Herbert Tran', '$7.29', '\nwhite&blue', '09/15/17'], ['Herbert Tran', '$7.29', '\nwhite&blue', '09/15/17'], ['Herbert Tran', '$7.29', '\nwhite&blue', '09/15/17'], ['Herbert Tran', '$7.29', '\nwhite&blue', '09/15/17']. It is, however, stripping out the whitespace.

Hello @willwithawill. The reason, just from a guess, is probably that you're appending the transaction inside the inner loop, which runs once per data point, and each transaction has four data points.
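A minimal sketch of the fix — append each cleaned transaction once, in the outer loop (the sample input here is invented for illustration):

```python
# Invented sample: each inner list is one un-stripped transaction
daily_transactions_split = [
    ["Herbert Tran   ", "  $7.29", "\nwhite&blue", "09/15/17 "],
    ["Myrtle Morris ", " $22.66", " green&white&blue", "09/15/17"],
]

transactions_clean = []
for transaction in daily_transactions_split:
    transaction_clean = []
    for data_point in transaction:
        # strip() with no argument removes spaces, tabs, AND newlines
        transaction_clean.append(data_point.strip())
    # Append once per transaction, OUTSIDE the inner loop,
    # so each transaction appears exactly once.
    transactions_clean.append(transaction_clean)

print(transactions_clean)
# [['Herbert Tran', '$7.29', 'white&blue', '09/15/17'],
#  ['Myrtle Morris', '$22.66', 'green&white&blue', '09/15/17']]
```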


Oh yeah. That would be it! Thanks for all the help!