However, modern CPUs, which are orders of magnitude faster, use considerably more electricity (sometimes many times more) when busy with calculations than when idle. Allowing your CPU to remain idle while waiting for input therefore saves electricity (unless you are using thermostat-controlled electrical heating at the time), and it can also reduce fan noise and component wear. This means people participating in distributed computation efforts are likely to do so only if they feel their contribution is worth those costs.
It might seem to make more sense to run background tasks on servers in data centres, especially ones that obtain their power responsibly: there, hopefully nobody has to sit next to the fan, and hardware is probably replaced on a regular schedule. However, this decision is best left to the administrators of the physical hardware. Many commercial servers are "virtual" (they look like separate machines but actually share resources on a single physical machine), and providers of virtual servers tend to discourage sustained high CPU use even at low priority, because the virtualisation software might not be able to reconcile priority levels across virtual machines.
If you want to participate in a long-running project without generating fan noise, the best option is probably to run it locally but throttle it to consume only a small percentage of idle CPU cycles, which would probably give computing power similar to a flat-out CPU from the old days. This will make work units take longer, however, and it will still consume some extra power and warm the hardware. So to be truly "free", it will have to be done in a place that is electrically heated anyway (or run from power that would otherwise go to waste), and on hardware whose lifetime you don't mind reducing.
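If the project runs under BOINC, this kind of throttle can be set via the client's global preferences override file. The settings below are standard BOINC preferences, but the file's location varies by platform, so treat the path in the comment as an assumption for a typical GNU/Linux install:

```xml
<!-- global_prefs_override.xml, placed in the BOINC data directory
     (often /var/lib/boinc-client on GNU/Linux; location varies). -->
<global_preferences>
  <!-- Use at most 25% of CPU time while computing; the client
       throttles by pausing and resuming tasks, reducing heat
       and fan noise. -->
  <cpu_usage_limit>25</cpu_usage_limit>
  <!-- Use at most half of the available CPU cores. -->
  <max_ncpus_pct>50</max_ncpus_pct>
</global_preferences>
```

After editing the file, tell a running client to pick up the change with `boinccmd --read_global_prefs_override`.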
On the other hand, suppose new hardware is purchased with a warranty and needs to be stress-tested before the warranty expires, and there is a choice between running an otherwise-useless load test and participating in a distributed computation. If the distributed computation makes a good enough test, then the CPU cycles that would have been consumed by the load test are "free" for the distributed computation, as long as one or more of its work units can be completed within the duration of the test. (If using BOINC in a temporary directory, you might want to set --abort_jobs_on_exit so the project "knows" not to wait for the deadline before reassigning any work you've interrupted.)
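A throwaway stress-test run might be sketched like this. The --dir and --abort_jobs_on_exit options and the boinccmd commands are stock BOINC, but the project URL and account key are placeholders you would replace with your own:

```shell
# Run a temporary BOINC client out of a scratch directory.
# PROJECT_URL and ACCOUNT_KEY are placeholders (assumptions).
mkdir -p /tmp/boinc-test
cd /tmp/boinc-test
boinc --dir /tmp/boinc-test --abort_jobs_on_exit &

# Attach to the project (the account key comes from the
# project's website).
boinccmd --project_attach "$PROJECT_URL" "$ACCOUNT_KEY"

# ... let the stress test run for as long as needed ...

# Shutting the client down aborts and reports unfinished tasks,
# so the server can reassign them immediately rather than
# waiting for the deadline.
boinccmd --quit
```

Because the client was started with --abort_jobs_on_exit, a clean shutdown is what signals the server that the interrupted work can be handed to someone else.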
A final consideration is the project's system requirements, some of which might not be made explicit in its documentation. If your client completely fails to return results, or returns results that are mostly rejected (check your results' status), it might be a case of "the developers didn't expect your OS version/CPU type/etc", and your contributions can't help until that's fixed. Also, some projects now prefer programmable GPUs; if you don't have a suitable GPU for these projects, then your CPU-only contribution might be dwarfed by those of contributors who do, representing diminishing returns in electricity consumed per unit of computation. This does not apply to CPU-only projects such as the current set of energy and medical research simulations on IBM's World Community Grid, but some of these still have unstated system requirements (e.g. "Outsmart Ebola Together" has been known to declare invalid all results computed on old versions of Mac OS X; this can be worked around by running it on a suitably-configured GNU/Linux in VirtualBox).