Justin, let me start by saying that I am not a CPU designer, but I used to know some of them and I would ask them the same kinds of questions. Sadly the answer is still, "it depends".
Let's consider each core as a Processing Unit (PU). The 2 single-core processors have 2 PUs and the 1 dual-core processor has 2 PUs. It would seem to be even, but already there is a disparity. There is always a limit to the number of transistors that fit on a single die. When you increase the number of cores on a die, there is usually a trade-off of less compute power per core. Using the same fabrication process, it's easier to put more into a single-core processor -- more pipelines, more L2 cache, more ALUs, etc. -- than into a dual-core.
Now, let's imagine that all the PUs have the same compute power. The next disparity appears in the speed at which the PUs can communicate with each other to coordinate work. The 2 cores on the same die can be interconnected by a high-speed bus that is also on-die. This bus can be as wide as you please and will run at the core clock speed. The 2 separate processors, however, must use an external interconnect bus, which will always be slower, narrower, and higher latency. As a result, for tasks that require coordination between PUs, the dual-core has a huge advantage.
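To make that coordination cost concrete, here's a minimal Python sketch (my own illustration, not anything CPU-specific): two threads incrementing one shared counter, where the lock plays the role of the interconnect -- every single increment has to synchronize through shared state before the work can proceed.

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    """Increment the shared counter n times, synchronizing on every step."""
    global counter
    for _ in range(n):
        with lock:  # each increment must coordinate through shared state
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000
```

Without the lock the result would be unpredictable; with it, the two workers spend much of their time waiting on each other -- exactly the kind of workload where a fast on-die interconnect pays off.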
Now, let's get back to the real world. With 2 processors, each core can be more powerful than the cores on a dual-core processor. As a result, any task that can be divided between the 2 processors with little or no coordination will be completed faster by the 2 processors. For tasks that make heavy use of the PU interconnect bus, the dual-core has a big advantage and will be faster when the PUs are all of equal power, and sometimes even when they are not. This is why the answer is: "it depends".
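The opposite case -- work that splits cleanly with no coordination -- looks like this Python sketch (the name `parallel_sum` and the 2-worker split are my own; note that CPython's GIL means threads only truly run in parallel for I/O or native code, so treat this as an illustration of the partitioning pattern, not a benchmark):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, workers=2):
    """Split a sum into independent chunks, one per worker (PU), then combine."""
    chunk = (len(values) + workers - 1) // workers
    parts = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # each worker sums its own chunk with no shared state;
        # the only coordination is the final combine step
        return sum(pool.map(sum, parts))

print(parallel_sum(list(range(1_000_001))))  # 500000500000
```

Because the workers never touch each other's data, two fast separate processors finish this kind of job sooner than two slower cores sharing a die.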
Sometimes the CPU maker puts out single-core chips that are really dual-core dies with one core disabled. In that case the single-core variant has the same per-core compute power as the dual-core version of the chip. However, it would be unusual for someone to buy 2 of these single-core chips and run them in parallel; it's much easier to just get a single dual-core.
Finally, the science of breaking compute work down into separate threads, and of dispatching those threads in a multi-PU environment, is still evolving. Does a deeper pipeline result in better real-world performance? Remember that Intel dropped its Hyper-Threading technology in the Core 2 design, but it's back in the Core i7. That just shows me that they are still learning and testing new ideas.