Ah, thank you very much. Now I understand.

Now the question is, ‘why did it work?’ This is an *edge case*, and edge cases call for a bit of forensic investigation.

Oh, yes: because of the count == 2 condition. Since 1 can only be divided by itself without remainder, count is 1 when x is 1, so 1 is correctly rejected as a prime.
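The count check itself isn’t shown in this thread, but as a sketch (names are mine), the divisor-counting primality test being described would look something like:

```python
def is_prime(x):
    # Count every divisor of x from 1 through x itself.
    count = 0
    for d in range(1, x + 1):
        if x % d == 0:
            count += 1
    # A prime has exactly two divisors: 1 and itself.
    # For x == 1, count is 1, so the edge case falls out naturally.
    return count == 2
```

So `is_prime(1)` is `False` without any special-casing, which is why it “just worked”.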

Not the cleanest, but it’s the first code I got to function properly:

```
def sum_of_prime_factors(n):
    lst = list(range(1, n + 1))
    factors = []
    for i in lst:
        if n % i == 0:
            factors.append(i)
    primelst = []
    for i in factors:
        check = True
        for j in range(2, i):
            if i % j == 0:
                check = False
        if check == True:
            primelst.append(i)
    total = 0
    for i in primelst:
        total = total + i
    # primelst includes 1 (range(2, 1) is empty, so check stays True),
    # so subtract it back out
    return total - 1

print(sum_of_prime_factors(91))
```

```
def sum_of_prime_factors(n):
    x = 0
    for i in range(2, n + 1):
        if n % i == 0:
            if n == i:
                return i
            else:
                y = sum_of_prime_factors(n // i)
                if i != y:
                    return i + y
                return y

print(sum_of_prime_factors(91))
```

```
def sum_of_prime_factors(n):
    primes = []
    for number in range(n + 1):
        start = 2
        while number > start:
            if number % start == 0:
                break
            start = start + 1
        if number == start:
            if n % number == 0:
                primes.append(number)
    return sum(primes)

print(sum_of_prime_factors(91))
```

It’s hard to explain what I did. First, I eliminate any non-prime numbers in the inner loop. If a number is prime, I check whether our number (91) is divisible by it.

It gives the correct answer for this,

`print(sum_of_prime_factors(96))`

which is what matters most, before any consideration is given to efficiency. When efficiency really matters, you’ll by then have explored *optimization*, which approaches the concern from several directions. For now, keep up the good work of getting your program to return trustworthy results.

Who said that perfection is the enemy of good enough?

To check whether a factor is prime, the conventional method is to test divisibility by every number smaller than the factor. This has O(n^2) complexity.

In my method, for each factor i, I collect all previously found prime factors of n (those less than i) into a list and check divisibility only against the elements of that list.

This drastically reduces the complexity to O(n·p(n)), where p(n) is the number of distinct prime factors of n.
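The code for that variant isn’t posted in the thread, but reading the description, a reconstruction might look like this (my own sketch, not the poster’s actual code). Any composite factor i of n must be divisible by some smaller prime that also divides n, and that prime was already collected, so the list check is sufficient:

```python
def sum_of_prime_factors(n):
    prime_factors = []
    for i in range(2, n + 1):
        if n % i != 0:
            continue
        # i divides n; test it only against the smaller prime
        # factors of n found so far, not every number below i.
        if all(i % p != 0 for p in prime_factors):
            prime_factors.append(i)
    return sum(prime_factors)
```

For 91 = 7 × 13, composite factors like 91 itself are rejected as soon as 7 is in the list.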

Big-O complexity describes the worst case, not the best, meaning `p(n)` is treated as `n` despite the optimization, so O(n·p(n)) is still O(n^2).

How is p(n) approximated as O(n)?

Consider the examples:

30 = 2 × 3 × 5, so p(30) = 3

2310 = 2 × 3 × 5 × 7 × 11, so p(2310) = 5

As n gets larger, p(n) << n

You might say (I’m not 100% sure) that p(n) grows like Θ(log n), giving an overall complexity of **Θ(n log n)**.
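One supporting observation: every prime factor is at least 2, so n ≥ 2^p(n) and hence p(n) ≤ log₂ n, which makes O(log n) a safe upper bound for p(n). To sanity-check the growth empirically, here’s a small helper (my own, not from the thread) that counts distinct prime factors:

```python
def p(n):
    """Count the distinct prime factors of n by trial division."""
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:  # strip out this prime entirely
                n //= d
        d += 1
    if n > 1:  # whatever remains is a single prime factor
        count += 1
    return count
```

Running it on the examples above confirms p(30) = 3 and p(2310) = 5, tiny compared to n.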

It is easy to argue that it is not `O(n^2)`; that is the worst case when `p(n)` is treated as `n`. One won’t dispute that the optimized code is closer to `O(n log n)`, so that is acceptable, and perhaps in this case it *is* the worst case. You have a point. We could use another opinion, to be fair.

```
def sum_of_prime_factors(n):
    def prime_finder(n):
        prime = []
        for a in range(2, n + 1):
            if 0 not in [a % b for b in range(2, a)]:
                prime.append(a)
        return prime
    sum_prime = 0
    primes = prime_finder(n)
    for p in primes:
        prod = n
        if prod % p == 0:
            prod = n // p  # integer division keeps prod an int
            sum_prime += p
            while True:
                if prod % p == 0:
                    prod = prod // p
                else:
                    break
    return sum_prime

print(sum_of_prime_factors(60))
```