Per.11: Move computation from run time to compile time
Reason
To decrease code size and run time. To avoid data races by using constants. To catch errors at compile time (and thus eliminate the need for error-handling code).
Example
double square(double d) { return d*d; }
static double s2 = square(2);    // old-style: dynamic initialization
constexpr double ntimes(double d, int n) // assume 0 <= n
{
double m = 1;
while (n--) m *= d;
return m;
}
constexpr double s3 {ntimes(2, 3)}; // modern-style: compile-time initialization
Code like the initialization of s2 isn't uncommon, especially for initialization that's a bit more complicated than square(). However, compared to the initialization of s3 there are two problems:
- we suffer the overhead of a function call at run time
- s2 just might be accessed by another thread before the initialization happens.
Note: you can't have a data race on a constant.
Example
Consider a popular technique for providing a handle for storing small objects in the handle itself and larger ones on the heap.
#include <array>        // for std::array in f() below
#include <type_traits>  // for std::conditional

constexpr int on_stack_max = 20;
template<typename T>
struct Scoped { // store a T in Scoped
// ...
T obj;
};

template<typename T>
struct On_heap { // store a T on the free store
// ...
T* objp;
};

template<typename T>
using Handle = typename std::conditional<(sizeof(T) <= on_stack_max),
Scoped<T>, // first alternative
On_heap<T> // second alternative
>::type;
void f()
{
Handle<double> v1; // the double goes on the stack
Handle<std::array<double, 200>> v2; // the array goes on the free store
// ...
}
Assume that Scoped and On_heap provide compatible user interfaces. Here we compute the optimal type to use at compile time. There are similar techniques for selecting the optimal function to call.
Note
The ideal is not to try to execute everything at compile time. Obviously, most computations depend on inputs, so they can't be moved to compile time; but beyond that logical constraint is the fact that complex compile-time computation can seriously increase compile times and complicate debugging. It is even possible to slow down code by compile-time computation. This is admittedly rare, but by factoring a general computation into separate optimal sub-calculations it is possible to render the instruction cache less effective.
Enforcement
- Look for simple functions that might be constexpr (but are not).
- Look for functions called with all constant-expression arguments.
- Look for macros that could be constexpr.
Original source
https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#per11-move-computation-from-run-time-to-compile-time