Render state. You are not in full control of it: the DX runtime may change it without your consent. In particular, this happens under DX10+ when you bind a texture resource that is also one of the render targets. The debug runtime will notify you in the log, but the regular one will just do it silently. And then you wonder: where did the texture go? Unsurprisingly, PIX, the hammer of DX frame profiling, cannot handle this behavior correctly, so debugging one of these little bugs may cost you a really long headache. DX11 introduces a new flag that allows binding a depth-stencil texture as read-only, which lets you sample from it. As a result, you have to split the rendering paths: copy the depth for DX10, and use the flag for DX11.
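A minimal sketch of the DX11 path, assuming the depth texture was created with a typeless format and both D3D11_BIND_DEPTH_STENCIL and D3D11_BIND_SHADER_RESOURCE; the device, context and resource variables are hypothetical:

```cpp
// Create a read-only depth-stencil view, so the same depth texture can
// stay bound for depth testing while it is also bound as an SRV.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
dsvDesc.Flags         = D3D11_DSV_READ_ONLY_DEPTH |
                        D3D11_DSV_READ_ONLY_STENCIL;

ID3D11DepthStencilView* readOnlyDSV = nullptr;
device->CreateDepthStencilView(depthTexture, &dsvDesc, &readOnlyDSV);

// Bind it together with an SRV created over the same texture
// (the SRV uses DXGI_FORMAT_R24_UNORM_X8_TYPELESS for the depth part).
context->OMSetRenderTargets(1, &colorRTV, readOnlyDSV);
context->PSSetShaderResources(0, 1, &depthSRV);
```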
In contrast, OpenGL gives you undefined behaviour whenever you read and write the same texture. While that sounds suspicious at first, in practice you don't sample from the texels being rendered to, so your program works as expected and no state is corrupted. Moreover, your GL program doesn't need a new flag or a duplicate depth texture: all you need is to disable depth/stencil writes, and you can read them. In conclusion, while DX creates problems of its own and then fixes them in new versions, OpenGL just works as expected.
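The equivalent GL path is just a couple of state calls; a sketch where sceneFBO, depthTexture and drawFullscreenPass() are hypothetical:

```cpp
// GL path: keep the depth texture attached to the FBO, but mask out
// depth/stencil writes before sampling it - no duplicate, no special flag.
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glDepthMask(GL_FALSE);    // disable depth writes
glStencilMask(0);         // disable stencil writes
glEnable(GL_DEPTH_TEST);  // depth testing still works against the buffer

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTexture);  // sample it in the shader
drawFullscreenPass();
```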
Documentation. The official source of knowledge about DX is MSDN. Unfortunately, you cannot track the changes that go there - I don't see any revision history. My co-worker was following the CHM documentation bundled with our DirectX SDK, according to which CopyResource() cannot be used if one of the surfaces is multi-sampled. He ended up copying a surface by drawing a full-screen quad with a designated shader... Only later did I discover that the online version is different: as of DX10.1, the function copies multi-sampled surfaces too.
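For illustration - under DX10.0 the documented way to get data out of a multi-sampled surface is ResolveSubresource(), while the online docs allow CopyResource() between multi-sampled surfaces from DX10.1 on; the device and texture variables are hypothetical:

```cpp
// DX10.0: resolve the MSAA surface into a single-sampled texture.
device->ResolveSubresource(resolvedTex, 0, msaaTex, 0,
                           DXGI_FORMAT_R8G8B8A8_UNORM);

// DX10.1+: CopyResource also accepts multi-sampled surfaces, as long as
// both resources match in type, dimensions, format and sample count.
device->CopyResource(dstMsaaTex, srcMsaaTex);
```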
Another example is the D3D11_RASTERIZER_DESC structure. It has a MultisampleEnable member, which (surprise!) affects only line rendering under DX10.1 and above, while toggling all MSAA rendering under DX10 and below. Yes, I know there is also an AntialiasedLineEnable flag, but how does that make it any less confusing?
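The two toggles side by side, in a sketch of filling out the structure (device is hypothetical):

```cpp
// The two MSAA-related flags of the rasterizer state, side by side.
D3D11_RASTERIZER_DESC rsDesc = {};
rsDesc.FillMode              = D3D11_FILL_SOLID;
rsDesc.CullMode              = D3D11_CULL_BACK;
rsDesc.MultisampleEnable     = TRUE;  // DX10.1+: picks the line AA algorithm
                                      // on MSAA targets; DX10 and below:
                                      // toggles all MSAA rendering
rsDesc.AntialiasedLineEnable = FALSE; // alpha line AA, applies only when
                                      // MultisampleEnable is FALSE

ID3D11RasterizerState* rsState = nullptr;
device->CreateRasterizerState(&rsDesc, &rsState);
```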
The situation gets worse as you dig deeper. As an example, take the multi-sampled texture object in HLSL. According to MSDN, you have to explicitly specify the number of samples in the template. Though I'm not sure - maybe the page will have been fixed by the time you read this. Anyway, in practice, under DX10.1+ you can skip it. That's how you end up scavenging for little details in conference presentations, scanning the forums, and trying to guess logically. DX knowledge is like a secret cave of treasures, which some companies (Epic, Crytek) know better than others.
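The two declaration forms, shown here as HLSL source inside a C++ string literal; the names are made up for illustration:

```cpp
// HLSL declarations of a multi-sampled texture, as a C++ string literal.
// MSDN shows the sample count in the template, but under DX10.1+ the
// count can be omitted and is taken from the bound resource.
const char* shaderSource = R"(
    Texture2DMS<float4, 4> gSceneExplicit;  // count spelled out, per MSDN
    Texture2DMS<float4>    gSceneImplicit;  // DX10.1+: count omitted

    float4 LoadSample0(int2 coord)
    {
        // MSAA textures are loaded per sample index, not sampled
        return gSceneImplicit.Load(coord, 0);
    }
)";
```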
OpenGL, on the other hand, provides a strictly versioned specification. You can download any version of it, see the changes highlighted, and find everything you are looking for. You don't need to scavenge the forums: if something works differently from the specification, it's most likely a bug, not a feature.
Sloppiness. You can do many things incorrectly, and the DX runtime will still try its best to keep your application working. For example, you can sample from a multi-sampled texture bound as a regular one: DX will automatically resolve the pixel before sampling, if possible. Or you can assign a float3 to a float in HLSL, and it will still work (this can probably be caught with a strict flag on the HLSL compiler). Such robust behaviour is very welcome on the end-user side, but developers need to be sure their code is valid. I would prefer it to crash hard on the first error encountered, or at least return an error code (hello, OpenGL). I understand that, again, one can use the debug runtime, read the error log, and figure it out. But the truth is, most development happens on the regular runtime, because the debug one is damn slow. And even then you have to step through the suspicious call to see the new error appear in the log - that's not how errors should be surfaced.
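For what it's worth, DX11's debug layer can be told to crash hard - break into the debugger on the first error - via the info queue; a sketch assuming a device created with the debug flag:

```cpp
// Create the device with the debug layer enabled...
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                  D3D11_CREATE_DEVICE_DEBUG, nullptr, 0,
                  D3D11_SDK_VERSION, &device, nullptr, &context);

// ...then ask the info queue to break on the first error or corruption,
// instead of silently appending to the log.
ID3D11InfoQueue* infoQueue = nullptr;
if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11InfoQueue),
                                     (void**)&infoQueue)))
{
    infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, TRUE);
    infoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_CORRUPTION, TRUE);
    infoQueue->Release();
}
```

This still doesn't help with the regular runtime's speed, but it at least turns a silent log entry into an immediate stop.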
All in all, DirectX gives the impression of being made by amateurs who got the power to talk to hardware developers. It's not developer-friendly, it's not blazing fast, it's not something to compete with OpenGL. I admit that's a bit of an emotional overstatement, and one could expect something like that from me. I will continue learning DX technology, and I hope to discover some real gems there, if they exist at all.