inside you are two systems. system 1 bumbles around, doing some tasks automatically, and heuristically generating feelings — including value-laden ones — about things. "this looks good! that looks evil!"
system 2 is where the explicit reasoning happens — or tries to happen, difficult as it is. if you're a rationalist, then system 2 is where you generally try to make your important consequentialist decisions.
when determining which system to use — and how to use it — in various situations, it can be tempting to prioritize system 2. but there are good reasons to rely on system 1's "common sense" judgment, for example to avoid being convinced by reasonable-sounding bullshit. another reason, however, is to avoid being convinced by reasonable-sounding correct things you don't want to know — for example, if demons are trying to blackmail you with reasoning that is actually correct. it's hard to precommit not to succumb to blackmail, because we're only human. and it's even harder to implement a generally correct decision theory in system 2: not just because running formal software in system 2 is generally unreliable, but also because we might not actually know for sure what the correct decision theory is.
so, one solution could be to approach novel ideas by just kinda bumbling around evaluating things with system 1 "common sense", outright rejecting blackmail-shaped things without system-2-thinking about them too much, and then starting up your system 2 when you need to solve specific problems that seem reasonably benign — including high-level ones. system 2 can also be of good use when deciding what to train system 1 on; you want to keep your heuristics and social influences and such reasonably hygienic and aligned to good and useful things. that may be what it takes for rationalists to win.
this isn't a strong recommendation, or even a claim that i intend to systematically do this myself — just a possibility worth giving reasonable consideration.