This is obvious, but differences between environments cause problems. You can expect bugs to cluster around them.
Case in point: on my current project, we use a hand-maintained schema for production (and manual testing), but our unit tests run against one generated by Hibernate from our mappings. Naturally, we are seeing bugs caused by differences between the two, the most recent being a mismatch in the width of particular columns.
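To make the failure mode concrete, here is a minimal sketch of how such a width mismatch arises; the entity, the column name, and the specific widths are invented for illustration, not taken from our project:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical entity. With Hibernate's schema export (hbm2ddl), the
// unit-test schema is derived from these annotations, so the generated
// column comes out as VARCHAR(500).
@Entity
public class Customer {

    @Id
    private Long id;

    // Generated test schema:             notes VARCHAR(500)
    // Hand-maintained production schema: notes VARCHAR(255)
    // A 300-character value passes every unit test, then fails (or is
    // silently truncated) against the real schema.
    @Column(length = 500)
    private String notes;
}
```

Nothing in the test suite can catch this, because the tests never see the column the production code will actually write to.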
Any time you accept a difference between your development environment and your production environment, you are introducing a vector for bugs. If you use a cluster in prod but not in development, expect clustering bugs. If you use a different database (or schema ;), expect database-related bugs. If your production app typically runs for a month without restarting, but your development environment only runs for an hour, expect endurance bugs.
This is not to say that accepting a difference is always the wrong call. It can be the right one, especially when the difference is expensive to eliminate (e.g. giving each developer a $250,000 server cluster is probably not viable). However, you need to treat it as a known risk and take care around it.
Differences in environments are one of the best arguments for a build machine (particularly an automated one). It is generally viable to have at least one environment that is “prod-like”; use it for your automated tests.
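As a sketch of that idea (the property names and the H2 default are my own invention, not from any particular build setup): developer machines default to a fast in-memory database, while the build machine overrides the JDBC URL to point at the prod-like instance.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Minimal sketch: tests ask this helper for a connection instead of
// hard-coding one, so each environment decides which database it gets.
public final class TestDatabase {

    private TestDatabase() {
    }

    /** Opens a connection to whichever database this environment configures. */
    public static Connection open() throws SQLException {
        // Developers get the in-memory default; the build machine passes
        // -Dtest.jdbc.url=... to target the prod-like schema instead.
        String url = System.getProperty(
                "test.jdbc.url", "jdbc:h2:mem:app;DB_CLOSE_DELAY=-1");
        String user = System.getProperty("test.jdbc.user", "sa");
        String password = System.getProperty("test.jdbc.password", "");
        return DriverManager.getConnection(url, user, password);
    }
}
```

The same suite then runs in both places, and the build machine is the one that exercises the schema that actually matters.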