Underflow before rounding occurs when the exact result is nonzero and its absolute value is strictly less than \(\beta^{e_{\textit{min}}}\) (i.e. the smallest positive normal number).
Underflow after rounding occurs when the result we would obtain by rounding to precision \(p\) with an unbounded exponent range is nonzero and its absolute value is strictly less than \(\beta^{e_{\textit{min}}}\).
IEEE 754 (including the 2008 revision) does not specify which of these two definitions of underflow should be used for the binary formats, so the same computation may signal underflow on some platforms and not on others.
For decimal, it specifies underflow before rounding.
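As a concrete illustration, here is a minimal C sketch (assuming IEEE 754 binary64 doubles and C99's `<fenv.h>`; the operands are chosen purely for illustration) of a case where the two definitions disagree: the exact product \((1+2^{-52})(2^{-1022}-2^{-1074}) = 2^{-1022}-2^{-1126}\) is below the smallest normal number, so it is tiny before rounding, yet under an unbounded exponent range it rounds to exactly \(2^{-1022}\), so it is not tiny after rounding. Whether `FE_UNDERFLOW` gets raised therefore depends on which detection the platform implements.

```c
#include <stdio.h>
#include <float.h>
#include <math.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

int main(void) {
    /* a = 1 + 2^-52 (smallest double above 1), b = largest denormal double.
       volatile keeps the compiler from folding the product at compile time. */
    volatile double a = nextafter(1.0, 2.0);     /* 1 + 2^-52         */
    volatile double b = nextafter(DBL_MIN, 0.0); /* 2^-1022 - 2^-1074 */

    feclearexcept(FE_ALL_EXCEPT);
    volatile double r = a * b;  /* exact product: 2^-1022 - 2^-1126   */

    /* The exact product is tiny before rounding, but it rounds to
       exactly DBL_MIN, so it is not tiny after rounding. */
    printf("r = %a, DBL_MIN = %a\n", r, DBL_MIN);
    printf("underflow: %s\n", fetestexcept(FE_UNDERFLOW) ? "raised" : "not raised");
    printf("inexact:   %s\n", fetestexcept(FE_INEXACT)   ? "raised" : "not raised");
    return 0;
}
```

Inexact should be raised either way; a platform that detects tininess before rounding also raises underflow here, while one that detects it after rounding leaves the underflow flag clear. Link with `-lm` for `nextafter`.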
What does an underflow (under either definition) signify? That the result is both tiny (in the denormal range) and inexact. If the result is denormal but exactly representable, then underflow is not signaled (under the default exception handling).
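A short sketch of that rule under the same assumptions (IEEE 754 binary64, C99 `<fenv.h>`): `DBL_MIN / 2` equals \(2^{-1023}\), which is denormal but exactly representable, so no underflow is signaled; `DBL_MIN / 3` is denormal and inexact, so both underflow and inexact are signaled.

```c
#include <stdio.h>
#include <float.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

static void report(const char *label) {
    printf("%-10s underflow=%d inexact=%d\n", label,
           !!fetestexcept(FE_UNDERFLOW), !!fetestexcept(FE_INEXACT));
    feclearexcept(FE_ALL_EXCEPT);
}

int main(void) {
    volatile double tiny = DBL_MIN;  /* volatile: force the divisions at run time */
    volatile double x;

    feclearexcept(FE_ALL_EXCEPT);

    x = tiny / 2.0;   /* denormal but exact: underflow is not signaled */
    report("DBL_MIN/2");

    x = tiny / 3.0;   /* denormal and inexact: underflow and inexact signaled */
    report("DBL_MIN/3");

    (void)x;
    return 0;
}
```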