Nothing to worry about, lad. You just forgot Rule Nineteen. Submit?
Rule Nineteen? What the hell is Rule Nineteen? Yes, yes, submit, submit!
Remember To Never Forget Rule One. And always ask yourself how come it was created in the first place, eh?
— Terry Pratchett, from Thief of Time (1)
If you were to ask several developers which critical principles have the most impact on their coding practices, style, etc., I suspect you'd get almost as many different answers as people you asked. The variations may be mostly in priorities, and those may be shaped or set by other people (management) or by various external forces or events (deadlines), but even then, I'd almost be willing to bet there'd be significant differences, if only because they were shaped by different experiences.
My current priorities, shaped by my experiences, started as a set of half-serious comments to my boss many, many moons ago. Over the subsequent years, across the various positions I've held and the handful of companies where I held them, I invariably came back to them over and over again. These days, though I probably don't pay as much attention to them as I should, they still end up shaping a lot of my development practices. For whatever they might be worth, and in the hopes that they'll help someone in some (probably small) way, here they are...
In retrospect:
These probably sound pretty cynical... Nevertheless, I'd stand by the principles they relate to, even if the phrasing is... overly harsh...
Rule #1: Never Trust User Input
There are any number of examples where this principle got ignored, or at least wasn't followed through adequately. In combination with Rule #2, below, insufficient attention to this principle can (eventually will?) result in some potentially significant longer-term issues, like:
- Buffer Overflow vulnerabilities (less a concern, arguably, in Python, but even so...). Variations of this sort of vulnerability range all the way up to concerns like Heartbleed;
- SQL Injection (SQLi) vulnerabilities; and
- Cross-Site Scripting (XSS) vulnerabilities.
Solution: If data is created directly by a user, validate it, validate it all, and raise errors if anything looks at all fishy.
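As a minimal sketch of what that might look like in Python (the names and the whitelist rule here are hypothetical, purely for illustration): validate first, and parameterize anything that touches SQL:

```python
import re

# Hypothetical whitelist rule: usernames limited to a known-safe character set.
USERNAME_PATTERN = re.compile(r'^[A-Za-z0-9_]{3,32}$')

def validate_username(raw_value):
    """Validate raw user input, raising if anything looks at all fishy."""
    if not isinstance(raw_value, str):
        raise TypeError(f'username must be a string, got {type(raw_value).__name__}')
    if not USERNAME_PATTERN.match(raw_value):
        raise ValueError(f'invalid username: {raw_value!r}')
    return raw_value

# Even a validated value shouldn't be spliced into SQL by hand; let the DB API
# parameterize it instead (sqlite3 placeholder syntax shown, as one example):
#   cursor.execute('SELECT * FROM users WHERE name = ?', (validate_username(name),))
```

Whitelisting (accept only what's known to be good) tends to age better than blacklisting (reject what's known to be bad), since attackers are more inventive than any blacklist.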
Rule #2: All Input Is Generated By Users
A bit of a semantics quibble, maybe, but absolutely true from at least one perspective: Even data generated by some completely autonomous process — sensors attached to very simple processors, running very simple programs to report that data out — still has significant human involvement. It's just buried a few layers deep, under whatever code is running to perform those basic tasks. Even if you know who wrote the code (even if you wrote the code), there's always a possibility that something got missed, that a bug will throw bad data at your code. That's at the benign end of the spectrum. At the other end, you'll have people and/or programs intentionally trying to break stuff.
Somewhere on that spectrum, or at some line drawn in whatever processes and logic a codebase is concerned with, there has to be some measure of trust, though. Otherwise, there's no point in even making the effort, yes? The alternative is writing all the code needed for a project. All of it, down to the OS. And who has time for that?
Solution: In addition to making sure that Rule #1 has been applied, make conscious decisions about where your boundaries of trust are, know where those boundaries are, and don't be afraid to revisit or review them as circumstances warrant.
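As one way to picture that (a hypothetical sketch, not taken from any particular project), the trust boundary might sit at the function where a sensor payload enters the system: everything arriving is unvetted, and everything returned is something downstream code has consciously decided to trust:

```python
import json

def read_sensor_payload(raw_bytes):
    """Trust boundary: unvetted data comes in, vetted data goes out.

    Downstream code trusts the returned dict; nothing else gets trusted.
    """
    try:
        payload = json.loads(raw_bytes)
    except (ValueError, TypeError):
        raise ValueError('payload is not valid JSON')
    if not isinstance(payload, dict):
        raise ValueError('payload is not a JSON object')
    # Hypothetical schema: a sensor id string and a numeric reading.
    sensor_id = payload.get('sensor_id')
    reading = payload.get('reading')
    if not isinstance(sensor_id, str) or not sensor_id:
        raise ValueError('missing or malformed sensor_id')
    if isinstance(reading, bool) or not isinstance(reading, (int, float)):
        raise ValueError('missing or non-numeric reading')
    return {'sensor_id': sensor_id, 'reading': float(reading)}
```

A nice side effect of a single, explicit boundary like this: if circumstances change (say, the sensors start signing their payloads), there's exactly one place to revisit.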
Rule #3: If You're Gonna Use a Framework, Use the Framework
This one is a relatively recent addition, and was formed after experience with two separate projects. In both cases, development was undertaken after selection of an application framework, one with an associated ORM. In both cases, a substantial portion of the final codebase was built out without leveraging the facilities provided by their frameworks.
To be fair, there may have been good reasons for the design decisions involved. Or, at a minimum, the reasons may have seemed like they were valid when the decision was made. I suspect, though, that any time this sort of decision is implemented, it will raise complications. I know that I've seen variations of the following between those two projects:
- Processes and logic that were well outside the functional expectations that would be normal for the framework (e.g., not having models or databases that conform to the ORM);
- Difficulty in implementing automated/unit testing of components that don't follow the framework's standards, probably because of the additional time required; and
- Difficulty in adequately documenting components that don't follow the framework's standards, also probably because of the additional time required.
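To make the contrast concrete (using Django's ORM purely as an example here; the model and app names are hypothetical), the framework-friendly version keeps the query inside facilities the framework already tests, documents, and parameterizes:

```python
# Hypothetical Django model import, for illustration only.
from myapp.models import Order

def recent_orders(customer, limit=10):
    """Fetch a customer's most recent orders through the ORM."""
    # The ORM handles quoting and parameterization, and the result is easy
    # to exercise with the framework's own test fixtures and tooling.
    return Order.objects.filter(customer=customer).order_by('-created')[:limit]

# Working around the framework tends to look like this instead: raw SQL that
# the framework's migrations, fixtures, and admin tooling can't see:
#   cursor.execute('SELECT * FROM orders WHERE customer_id = %s '
#                  'ORDER BY created DESC LIMIT 10', [customer.pk])
```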
How Well Do I Adhere To These...?
I'd like to think that I keep these in mind as often as they apply, but in the real world, with real deadlines and other real-world constraints, actually abiding by them is sometimes just not practical. That said, even if they aren't actually enforced, keeping them in mind is, I think, beneficial, as is being able to point out the risks that will arise from setting them aside in favor of making that deadline, or meeting whatever other factors are shaping the time available.
(1) If you haven't already surmised as much by now, yes, I'm an avid Terry Pratchett fan...
+++Divide By Cucumber Error. Please Reinstall Universe And Reboot+++
I could not agree more with Rule #3. Actually, I think that by using frameworks correctly you can often avoid lots of issues caused by user input, i.e., you have Rule #1 partially covered.
I suspect that (obviously) depends on the framework, but a thorough framework will have facilities for such things, and a well-designed one will make it as painless as possible.