I don't know what you're basing being "reasonably sure" on, but it's a
bad idea, even if the OP thinks he wants it.
Having audited hundreds of complex workbook systems, I can say with
reasonable authority that using ISERROR() is near the top of the list
for generating erroneous results, leading to bad decisions based on
those results. I certainly wasn't able to infer as much of the OP's
intent from his single sentence as you were, but IMO having a dependent
function return a valid value when an unexpected error occurs in its
arguments is a first-class design foul-up.
For better or worse (mostly worse), XL returns #N/A as an "expected
error" for some functions, e.g., VLOOKUP. Trapping it with ISERROR()
means that if a precedent calculation returns, say, #DIV/0! due to an
invalid or unanticipated condition, that error will be ignored too,
and, depending on how the data is laid out, it may *not* be clearly
exposed (and you just have to read these groups to see how many people
*don't* check their data first when they get an error). The probability
of missing the error in the precedent cells approaches 1 as the amount
of data increases.
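To make the failure mode concrete (A1 and Table here are just
placeholder names), the usual blanket trap looks something like

  =IF(ISERROR(VLOOKUP(A1,Table,2,FALSE)),"",VLOOKUP(A1,Table,2,FALSE))

If A1 itself contains a #DIV/0! from an upstream calculation, VLOOKUP
propagates that error, ISERROR() swallows it, and the cell quietly
shows "" as if nothing were wrong. Trapping only the *expected* error
with ISNA() leaves genuine errors visible:

  =IF(ISNA(VLOOKUP(A1,Table,2,FALSE)),"",VLOOKUP(A1,Table,2,FALSE))

Here a missing lookup value still returns "", but a #DIV/0! in A1
propagates to the result where it can be seen and fixed.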
That's why I fully expect XL2007's new IFERROR() function to lead to
lowered reliability. It's great when used to flag unexpected errors;
it will lead to wrong results when used to mask them.
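IFERROR() makes the blanket trap even easier to write (again, A1 and
Table are placeholders):

  =IFERROR(VLOOKUP(A1,Table,2,FALSE),"")

Shorter and it evaluates the VLOOKUP only once, but it suppresses
*every* error type, #N/A and #DIV/0! alike, which is exactly the
masking behavior described above.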
And, as you said, that's my opinion.