I was a math major as an undergrad. I am now a Ph.D. student in philosophy in the dissertation stage. But I am also a full-time bioinformatician, or bioinformatician-in-training, or something. I work with computers a lot.
People sometimes imply that the shift from philosophy to computer stuff is a radical one. I’m not sure it is. Philosophy requires as much precision as computer work does, though what it is to be precise changes a bit with the underlying subject. And in computers, as in philosophy, foundational questions pop up frequently, and often in surprising places. (One of my tasks today required me to use hash tables in what is for me a new way. I had to turn to my trusty algorithms book to recall some of the details of hash function implementation. And understanding these details requires, I think, grappling with some fairly basic stuff about computer programs and architecture.)
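The details I mean are roughly of this kind. Here is a toy separate-chaining hash table, purely my own illustrative sketch rather than anything from the book: even this minimal version forces you to think about how arbitrary keys become array indices and what happens when two keys collide.

```python
# A toy hash table using separate chaining -- an illustrative sketch only.

class HashTable:
    def __init__(self, num_buckets=8):
        # Each bucket is a list of (key, value) pairs whose keys
        # hash to the same slot (a "collision chain").
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key):
        # Python's built-in hash() maps the key to an integer;
        # the modulus folds that integer into the bucket range.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # key already present: overwrite
                return
        bucket.append((key, value))

    def get(self, key):
        # Walk only the one chain the key hashes into.
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

table = HashTable()
table.put("gene", "BRCA1")
table.put("gene", "TP53")   # same key, so the value is replaced
print(table.get("gene"))    # prints TP53
```

Even here, foundational questions intrude: why `hash()` behaves as it does, why the number of buckets matters, and why lookup is fast on average but not in the worst case.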
The most difficult part of the transition is something else entirely: developing the willingness to proceed without complete understanding. I work on ancient philosophy, which is full of warnings about the terrible things that happen to those who pretend to knowledge without having it; in my training I am expected to be able to justify the parts of an argument down to the very basics. And in a standard undergraduate math course, you start the first week with the basic definitions and axioms, and then you build up to whatever you build up to in that course.
Richard Beals began a freshman course by claiming, among other things, that “almost understanding is the enemy of understanding.” One consequence of that slogan is that if you’re only pretty sure you can explain the steps of a proof, or if you understand it all save one tricky bit, you don’t actually understand it at all, and you will fail if you give in to the temptation to move on, satisfied with your almost-knowledge. I’m sure most of my philosophy professors would have heartily endorsed the sentiment. It’s the sort of thing Socrates could have said.
With computers, though, almost understanding is often OK. Write the thing, test it, think about it, annotate it, and move on! Google an error message and try to hack together a solution! Reuse other people’s code (if the license says you can)! I would love to understand everything perfectly, but it is all so new, there is so much to know, and much can be accomplished with partial understanding and a lot of care.
I’m a heavy reader of computer blogs, and I think that most professionals are in a position similar to mine. The goal seems to be to know as much as possible, but also to develop the skill of working proficiently in areas outside your expertise. It still makes me uncomfortable; I often want to stop for a day and bone up on whatever it is I don’t know. I can’t. In my dissertation work that would be mandatory; in my computer work it is impossible.