After researching directory traversal (path traversal) online and reading several articles and posts about it, I still don’t quite understand when I need to watch out for this class of security problem. Should I always guard against it when developing a back-end, or only when implementing certain features?
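To make the question concrete: the classic danger point is any feature that builds a filesystem path from user input, such as a file-download endpoint, an upload handler, or a template picker. A minimal sketch of a guard in Python (the base directory and function name are hypothetical, not from any framework):

```python
import os

# Hypothetical root that downloads are allowed to come from.
# realpath() up front so later comparisons use the resolved path.
BASE_DIR = os.path.realpath("/var/www/uploads")

def safe_open(user_supplied_name: str):
    """Open a file under BASE_DIR, rejecting path traversal attempts."""
    # Resolve the requested path to an absolute, symlink-free form.
    requested = os.path.realpath(os.path.join(BASE_DIR, user_supplied_name))
    # After resolution, the path must still live inside BASE_DIR.
    if os.path.commonpath([requested, BASE_DIR]) != BASE_DIR:
        raise ValueError("path traversal attempt: " + user_supplied_name)
    return open(requested, "rb")
```

The key idea is to compare the *resolved* path against the allowed root, rather than trying to blacklist `..` sequences, which can be smuggled in via encodings.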
If an attacker has already obtained access to a volume, is directory traversal still a problem? Mind you, we already restrict access to folders, however feebly: with .htaccess we can make folder contents unlistable, and robots.txt can tell well-behaved crawlers to keep away. Both of these come with a downside.
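For reference, the usual .htaccess directive for making a folder unlistable on Apache is a one-liner (this assumes mod_autoindex is in play and that the server's `AllowOverride` setting permits it):

```apache
# .htaccess — disable automatic directory listings for this folder and below
Options -Indexes
```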
An .htaccess file only takes effect where the server finds one, but don’t worry, we don’t have to copy it into every folder: its directives are inherited by all child folders. The catch is that, once overrides are enabled, the server has to check every directory along the request path for an .htaccess file on every single request!
Think of the drain on a site when this level of scrutiny needs to be applied on every request. It’s tantamount to having a regular expression engine running all the time in the background.
The cure for listable folders is to give them all an index.php file. Any server-side extension can serve as the default document, not just .php; it depends on the host. Bottom line: have an index page in every folder.
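Populating every folder by hand gets old quickly, so this is usually scripted. A sketch with `find` (the docroot path in the usage comment is a placeholder):

```shell
# populate_indexes DIR — drop an empty index.html into every directory
# under DIR that doesn't already have one, so nothing gets auto-listed.
populate_indexes() {
  find "$1" -type d \
    -exec sh -c '[ -e "$1/index.html" ] || touch "$1/index.html"' _ {} \;
}

# Usage (path is a placeholder for your docroot):
# populate_indexes /var/www/html
```

Empty files are enough to suppress the listing; a real site would of course serve a proper page instead.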
I’m still not completely clear on the question, so I hope you are willing to bear with my investigation, postulation, &c. Please help us along.
Say we put an index.php file in an image folder. We have struck a gold mine: with a simple script and an HTML template we can render a thumbnail page, giving every thumbnail a link to the full photo, which we then present in a template page.
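A sketch of such a gallery script, here in Python rather than PHP, and with browser-side scaling standing in for real thumbnail generation (the function name and folder layout are my own assumptions):

```python
import html
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif"}

def gallery_page(folder: str) -> str:
    """Build a simple HTML thumbnail page for the images in `folder`."""
    items = []
    for name in sorted(os.listdir(folder)):
        if os.path.splitext(name)[1].lower() in IMAGE_EXTS:
            safe = html.escape(name, quote=True)
            # Each "thumb" is just the image scaled down by the browser,
            # linking through to the original file.
            items.append(
                '<a href="%s"><img src="%s" width="120" alt="%s"></a>'
                % (safe, safe, safe)
            )
    return "<!DOCTYPE html><title>Gallery</title>\n" + "\n".join(items)
```

A production version would generate actual resized thumbnails (with an imaging library) rather than shipping full-size files to the browser.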
That image page you just saw: every detail on it, even the thumbnail, came from the image file's header data.
Being aware of security problems is more the concern. Most of them are at a social level, not a technical one; we humans are made pawns very easily.