Does an unused member variable take up memory?
Does initializing a member variable and not referencing/using it further take up RAM during runtime, or does the compiler simply ignore that variable?
struct Foo {
    int var1;
    int var2;
    Foo() { var1 = 5; std::cout << var1; }
};
In the example above, the member var1 gets a value which is then displayed in the console. var2, however, is not used at all, and therefore writing it to memory during runtime would be a waste of resources. Does the compiler take these kinds of situations into account and simply ignore unused variables, or is the Foo object always the same size, regardless of whether its members are used?
c++ memory
7
There is a metric ton of low-level driver code out there that specifically adds do-nothing struct members for padding to match hardware data frame sizes and as a hack to get desired memory alignment. If a compiler started optimizing these out there would be much breakage.
– Andy Brown
18 hours ago
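To make the padding trick above concrete, here is a minimal sketch of the kind of layout such driver code relies on; the struct and field names are purely illustrative and not taken from any real driver:

#include <cstdint>

// Hypothetical device frame: the _reserved bytes are never read or written,
// but they keep payload at the offset the hardware expects.
struct HwFrame {
    std::uint8_t  command;
    std::uint8_t  _reserved[3];  // "do-nothing" members used purely for layout
    std::uint32_t payload;
};
static_assert(sizeof(HwFrame) == 8, "frame size must match the device spec");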
2
@Andy they're not really do-nothing, as the address of the following data members is evaluated. This means the existence of those padding members does have an observable effect on the program. Here, var2 doesn't.
– YSC
17 hours ago
2
I would be surprised if the compiler could optimize it away, given that any compilation unit addressing such a struct might get linked to another compilation unit using the same struct, and the compiler can't know whether the separate compilation unit addresses the member or not.
– Galik
17 hours ago
2
@geza sizeof(Foo) cannot decrease by definition - if you print sizeof(Foo) it must yield 8 (on common platforms). Compilers can optimize away the space used by var2 (no matter if through new or on the stack or in function calls...) in any context they find it reasonable, even without LTO or whole program optimization. Where that is not possible they won't do it, as with just about any other optimization. I believe the edit to the accepted answer makes it significantly less likely to be misled by it.
– Max Langhof
15 hours ago
@geza I think it all depends on what style of programming is used. In template-heavy code, you have so many tiny functions and tiny structs which only exist to be used once, that the trivial cases are pretty much everywhere.
– hegel5000
13 hours ago
6 Answers
If the observable behaviour of your program doesn't depend on the existence of that unused data member1, the compiler is allowed to optimize it away. This is the so-called as-if rule2. And compilers have become astonishingly good at it. Look:
#include <iostream>

struct Foo1 {
    int var1 = 5;
    Foo1() { std::cout << var1; }
};

struct Foo2 {
    int var1 = 5;
    int var2;
    Foo2() { std::cout << var1; }
};

void f1() {
    (void) Foo1{};
}

void f2() {
    (void) Foo2{};
}
If we ask gcc to compile this translation unit, it outputs:
f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()
f2 is the same as f1, and no memory is ever used to hold an actual Foo2::var2. Clang does something similar.
1) Reading the comments, I'm feeling the need to expand a bit on that matter. There are a lot of operations on an object containing an "unused" data member which would have an observable effect on the program. If such an operation is performed or if the compiler cannot prove one is performed, that "unused" data member is part of the observable behaviour of your program. Including, but not limited to:
- taking the size of that type of object (sizeof(Foo)),
- taking the address of a data member declared after the "unused" one,
- copying the object with a function like memcpy,
- manipulating the representation of the object (like with memcmp),
- qualifying an object as volatile,
- etc. (a couple of these are sketched in code at the end of this answer)
2)
The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.
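A minimal sketch of a couple of the operations listed in footnote 1, assuming a Foo with the same two-int layout as in the question:

#include <cstring>

struct Foo {
    int var1;
    int var2;
};

// The object size counts var2; this holds on common platforms where int is
// 4 bytes and there is no padding between the two members.
static_assert(sizeof(Foo) == 2 * sizeof(int), "var2 contributes to sizeof(Foo)");

// Comparing object representations reads var2's bytes, so var2 is observable
// here and cannot simply be dropped from the layout.
bool same_representation(const Foo& a, const Foo& b) {
    return std::memcmp(&a, &b, sizeof(Foo)) == 0;
}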
13
The code you describe is actually pretty trivial, and the compiler has enough visibility to prove that the only behaviour needed is a call to operator<<() with parameter 5 in both f1() and f2(). And it is not only optimising out an unused member of the structs. It is optimising the structs themselves out of existence. Provide a case where it can't optimise out the structs (e.g. create one, and pass its address to a function defined in another source file) and it is a fair bet the unused members will not be optimised out of existence.
– Peter
17 hours ago
1
But couldn't this optimization lead to invalid memory access? Consider structures that have some zeroes at the end in order to make a uniform structure for supporting different data types (like sockaddr). If the compiler removed that unused part, it could lead to an access violation.
– Afshin
17 hours ago
7
@Peter There are basically no structs in generated code anyway. As in, there is no difference in any generated code between two adjacent ints and a struct X { int a, b; } instance. Calling this "optimizing out the struct" is essentially incorrect because this is not an optimization, it's part of transforming stuff to machine code. You may be able to see remnants of your data structures in the generated machine code, but those data structures don't exist there any longer. Also see my answer.
– Max Langhof
15 hours ago
1
@Afshin The compiler is only allowed to do these optimizations if it can prove that it doesn't cause problems. If you think of a situation where it causes problems, the compiler is not allowed to do it there. It also seems like you're alluding to type punning, which is Undefined Behavior according to the standard (in which case the compiler can do whatever the hell it wants).
– Max Langhof
15 hours ago
3
@geza It doesn't make much sense to think of compilers "removing" member variables in the first place. They already turn member variables into some machine code representation, and if they can prove that one of those is not needed in the current context, they can eliminate it there. Of course the compiler won't edit your source files or pretend that your source files looked differently or anything of that nature, so it's not "removing the member" in the narrow sense. A better question would be "do unused members always cause overhead?", which can be answered with "no".
– Max Langhof
15 hours ago
It's important to realize that the code the compiler produces has no actual knowledge of your data structures (because such a thing doesn't exist on assembly level), and neither does the optimizer. The compiler only produces code for each function, not data structures.
Ok, it also writes constant data sections and such.
Based on that, we can already say that the optimizer won't "remove" or "eliminate" members, because it doesn't output data structures. It outputs code, which may or may not use the members, and among its goals is saving memory or cycles by eliminating pointless uses (i.e. writes/reads) of the members.
The gist of it is that "if the compiler can prove within the scope of a function (including functions that were inlined into it) that the unused member makes no difference for how the function operates (and what it returns) then chances are good that the presence of the member causes no overhead".
As you make the interactions of a function with the outside world more complicated/unclear to the compiler (take/return more complex data structures, e.g. a std::vector<Foo>, hide the definition of a function in a different compilation unit, forbid/disincentivize inlining etc.), it becomes more and more likely that the compiler cannot prove that the unused member has no effect.
There are no hard rules here because it all depends on the optimizations the compiler makes, but as long as you do trivial things (such as shown in YSC's answer) it's very likely that no overhead will be present, whereas doing complicated things (e.g. returning a std::vector<Foo> from a function too large for inlining) will probably incur the overhead (a sketch of such a case closes this answer).
To illustrate the point, consider this example:
#include <array>
#include <cstring>

struct Foo {
    int var1 = 3;
    int var2 = 4;
    int var3 = 5;
};

int test() {
    Foo foo;
    std::array<char, sizeof(Foo)> arr;
    std::memcpy(&arr, &foo, sizeof(Foo));
    return arr[0] + arr[4];
}
We do non-trivial things here (take addresses, inspect and add bytes from the byte representation) and yet the optimizer can figure out that the result is always the same on this platform:
test(): # @test()
mov eax, 7
ret
Not only did the members of Foo not occupy any memory, a Foo didn't even come into existence! If there are other usages that can't be optimized then e.g. sizeof(Foo) might matter - but only for that segment of code! If all usages could be optimized like this then the existence of e.g. var3 does not influence the generated code. But even if it is used somewhere else, test() would remain optimized!
In short: Each usage of Foo is optimized independently. Some may use more memory because of an unneeded member, some may not. Consult your compiler manual for more details.
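As a rough sketch of a usage that likely does pay for the unused member (the function names here are made up for the example): once objects cross a boundary the optimizer cannot see through, the full Foo layout, var2 included, generally has to exist in memory:

#include <vector>

struct Foo {
    int var1 = 5;
    int var2 = 0;  // "unused", but part of Foo's in-memory layout
};

// Defined in some other translation unit; the compiler of this file
// cannot prove that it never touches var2.
void consume(const std::vector<Foo>& v);

void produce() {
    std::vector<Foo> v(1000);  // almost certainly allocates 1000 * sizeof(Foo) bytes,
    consume(v);                // var2 included
}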
The compiler will only optimise away an unused member variable (especially a public one) if it can prove that removing the variable has no side effects and that no part of the program depends on the size of Foo being the same.
I don't think any current compiler performs such optimisations unless the structure isn't really being used at all. Some compilers may at least warn about unused private variables but not usually for public ones.
1
And yet it does: godbolt.org/z/UJKguS + no compiler would warn for an unused data member.
– YSC
18 hours ago
@YSC clang++ does warn about unused data members and variables.
– Maxim Egorushkin
18 hours ago
2
@YSC I think that's a slightly different situation, it's optimised the structure away completely and just prints 5 directly
– Alan Birtles
18 hours ago
4
@AlanBirtles I don't see how it is different. The compiler optimized everything from the object that has no effect on the observable behavior of the program. So your first sentence "the compiler is very unlikely to optimize away an unused member variable" is wrong.
– YSC
17 hours ago
1
@YSC in real code where the structure is actually being used rather than just constructed for its side effects, it's probably more unlikely it would be optimised away
– Alan Birtles
17 hours ago
In general, you have to assume that you get what you have asked for, i.e., the "unused" member variables are there.
Since in your example both members are public, the compiler cannot know if some code (particularly from other translation units = other *.cpp files, which are compiled separately and then linked) would access the "unused" member.
YSC's answer gives a very simple example, where the class type is only used as a variable of automatic storage duration and where no pointer to that variable is taken. There, the compiler can inline all the code and can then eliminate all the dead code.
If you have interfaces between functions defined in different translation units, typically the compiler does not know anything. The interfaces typically follow some predefined ABI (like the platform's C++ ABI) such that different object files can be linked together without any problems. Typically ABIs do not make a difference based on whether a member is used or not. So, in such cases the second member has to be physically in memory (unless eliminated later by the linker).
And as long as you are within the boundaries of the language, you cannot observe that any elimination happens. If you call sizeof(Foo), you will get 2*sizeof(int). If you create an array of Foos, the distance between the beginnings of two consecutive Foo objects is always sizeof(Foo) bytes.
Your type is a standard-layout type, which means that you can also access members based on compile-time computed offsets (cf. the offsetof macro). Moreover, you can inspect the byte-by-byte representation of the object by copying it onto an array of char using std::memcpy. In all these cases, the second member can be observed to be there.
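A small sketch of those in-language observations, assuming the two-int Foo from the question (nothing here depends on a particular compiler):

#include <cstddef>  // offsetof
#include <cstring>  // std::memcpy

struct Foo {
    int var1;
    int var2;
};

// Standard-layout type: var2 has a fixed, compile-time offset after var1.
static_assert(offsetof(Foo, var2) >= sizeof(int), "var2 is laid out after var1");

// The array stride is always sizeof(Foo), so every element carries var2's bytes.
static_assert(sizeof(Foo[4]) == 4 * sizeof(Foo), "stride equals sizeof(Foo)");

// Copying the object representation copies var2's bytes as well.
void snapshot(const Foo& f, unsigned char (&out)[sizeof(Foo)]) {
    std::memcpy(out, &f, sizeof(Foo));
}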
Comments are not for extended discussion; this conversation has been moved to chat.
– Cody Gray♦
10 hours ago
+1: only aggressive whole-program optimization could possibly adjust data layout (including compile-time sizes and offsets) for cases where a local struct object isn't optimized away entirely. gcc -fwhole-program -O3 *.c could in theory do it, but in practice probably won't (e.g. in case the program makes some assumptions about what exact value sizeof() has on this target, and because it's a really complicated optimization that programmers should do by hand if they want it).
– Peter Cordes
5 hours ago
It's dependent on your compiler and its optimization level.
In gcc, if you specify -O, it will turn on the following optimization flags:
-fauto-inc-dec
-fbranch-count-reg
-fcombine-stack-adjustments
-fcompare-elim
-fcprop-registers
-fdce
-fdefer-pop
...
-fdce stands for Dead Code Elimination.
You can use __attribute__((used)) to prevent gcc eliminating an unused variable with static storage:
This attribute, attached to a variable with static storage, means that
the variable must be emitted even if it appears that the variable is
not referenced.
When applied to a static data member of a C++ class template, the
attribute also means that the member is instantiated if the class
itself is instantiated.
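A minimal sketch of that attribute in use (a GCC/Clang extension; the variable name is just an example):

// Without the attribute, an otherwise unreferenced static like this
// may be dropped from the object file entirely.
static const int build_tag __attribute__((used)) = 0x2019;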
That's for static data members, not unused per-instance members (which don't get optimized away unless the whole object does). But yes, I guess that does count. BTW, eliminating unused static variables isn't dead code elimination, unless GCC bends the term.
– Peter Cordes
5 hours ago
The examples provided by other answers to this question which elide var2 are based on a single optimization technique: constant propagation, and subsequent elision of the whole structure (not the elision of just var2). This is the simple case, and optimizing compilers do implement it.
For unmanaged C/C++ code the answer is that the compiler will in general not elide var2. As far as I know there is no support for such a C/C++ struct transformation in debugging information, and if the struct is accessible as a variable in a debugger then var2 cannot be elided. As far as I know no current C/C++ compiler can specialize functions according to elision of var2, so if the struct is passed to or returned from a non-inlined function then var2 cannot be elided.
For managed languages such as C#/Java with a JIT compiler, the compiler might be able to safely elide var2 because it can precisely track if it is being used and whether it escapes to unmanaged code. The physical size of the struct in managed languages can be different from its size reported to the programmer.
Year 2019 C/C++ compilers cannot elide var2 from the struct unless the whole struct variable is elided. For interesting cases of elision of var2 from the struct, the answer is: No.
Some future C/C++ compilers will be able to elide var2 from the struct, and the ecosystem built around the compilers will need to adapt to process elision information generated by compilers.
Your Answer
StackExchange.ifUsing("editor", function ()
StackExchange.using("externalEditor", function ()
StackExchange.using("snippets", function ()
StackExchange.snippets.init();
);
);
, "code-snippets");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "1"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f55060820%2fdoes-an-unused-member-variable-take-up-memory%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
If the observable behaviour of your program doesn't depend on that unused data-member existence1, the compiler is allowed to optimized it away. This is the so called as-if rule2. And compilers became astonishingly good at it. Look
#include <iostream>
struct Foo1
int var1 = 5;
Foo1() std::cout << var1;
;
struct Foo2
int var1 = 5;
int var2;
Foo2() std::cout << var1;
;
void f1()
(void) Foo1;
void f2()
(void) Foo2;
If we ask gcc to compile this translation unit, it outputs:
f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()
f2
is the same as f1
, and no memory is ever used to hold an actual Foo2::var2
. Clang does something similar.
1) Reading the comments, I'm feeling the need to expand a bit on that matter. There are a lot of operations on an object containing an "unused" data member which would have an observable effect on the program. If such an operation is performed or if the compiler cannot prove one is performed, that "unused" data member is part of the observable behaviour of your program. Including, but not limited to:
- taking the size of a type of object (
sizeof(Foo)
), - taking the address of a data member declared after the "unused" one,
- copying the object with a function like
memcpy
, - manipulating the representation of the object (like with
memcmp
), - qualifying an object as volatile,
etc.
2)
The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.
13
The code you describe is actually pretty trivial, and the compiler has enough visibility to prove that the only behaviour needed is call anoperator<<()
with parameter5
in bothf1()
andf2()
. And it is not only optimising out an unused member of thestruct
s. It is optimising the structs themselves out of existence. Provide a case where it can't optimise out the structs (e.g. create one, and pass its address to a function defined in another source file) and it is a fair bet the unused members will not be optimised out of existence.
– Peter
17 hours ago
1
But doesn't this optimization may lead to invalid memory access? Assume structures that has some zeroes at end of structure in order to make a uniform structure for supporting different data types(likesockaddr
). If it remove that unused part, it may lead to access violation.
– Afshin
17 hours ago
7
@Peter There are basically no structs in generated code anyway. As in, there is no difference in any generated code between two adjacentint
s and astruct X int a, b;
instance. Calling this "optimizing out the struct" is essentially incorrect because this is not an optimization, it's part of transforming stuff to machine code. You may be able to see remnants of your data structures in the generated machine code, but those data structures don't exist there any longer. Also see my answer.
– Max Langhof
15 hours ago
1
@Afshin The compiler is only allowed to do these optimizations if it can prove that it doesn't cause problems. If you think of a situation where it causes problems, the compiler is not allowed to do it there. It also seems like you're alluding to type punning, which is Undefined Behavior according to the standard (in which case the compiler can do whatever the hell it wants).
– Max Langhof
15 hours ago
3
@geza It doesn't make much sense to think of compilers "removing" member variables in the first place. They already turn member variables into some machine code representation, and if they can prove that one of those is not needed in the current context, they can eliminate it there. Of course the compiler won't edit your source files or pretend that your source files looked differently or anything of that nature, so it's not "removing the member" in the narrow sense. A better question would be "do unused members always cause overhead?", which can be answered with "no".
– Max Langhof
15 hours ago
|
show 13 more comments
If the observable behaviour of your program doesn't depend on that unused data-member existence1, the compiler is allowed to optimized it away. This is the so called as-if rule2. And compilers became astonishingly good at it. Look
#include <iostream>
struct Foo1
int var1 = 5;
Foo1() std::cout << var1;
;
struct Foo2
int var1 = 5;
int var2;
Foo2() std::cout << var1;
;
void f1()
(void) Foo1;
void f2()
(void) Foo2;
If we ask gcc to compile this translation unit, it outputs:
f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()
f2
is the same as f1
, and no memory is ever used to hold an actual Foo2::var2
. Clang does something similar.
1) Reading the comments, I'm feeling the need to expand a bit on that matter. There are a lot of operations on an object containing an "unused" data member which would have an observable effect on the program. If such an operation is performed or if the compiler cannot prove one is performed, that "unused" data member is part of the observable behaviour of your program. Including, but not limited to:
- taking the size of a type of object (
sizeof(Foo)
), - taking the address of a data member declared after the "unused" one,
- copying the object with a function like
memcpy
, - manipulating the representation of the object (like with
memcmp
), - qualifying an object as volatile,
etc.
2)
The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.
13
The code you describe is actually pretty trivial, and the compiler has enough visibility to prove that the only behaviour needed is call anoperator<<()
with parameter5
in bothf1()
andf2()
. And it is not only optimising out an unused member of thestruct
s. It is optimising the structs themselves out of existence. Provide a case where it can't optimise out the structs (e.g. create one, and pass its address to a function defined in another source file) and it is a fair bet the unused members will not be optimised out of existence.
– Peter
17 hours ago
1
But doesn't this optimization may lead to invalid memory access? Assume structures that has some zeroes at end of structure in order to make a uniform structure for supporting different data types(likesockaddr
). If it remove that unused part, it may lead to access violation.
– Afshin
17 hours ago
7
@Peter There are basically no structs in generated code anyway. As in, there is no difference in any generated code between two adjacentint
s and astruct X int a, b;
instance. Calling this "optimizing out the struct" is essentially incorrect because this is not an optimization, it's part of transforming stuff to machine code. You may be able to see remnants of your data structures in the generated machine code, but those data structures don't exist there any longer. Also see my answer.
– Max Langhof
15 hours ago
1
@Afshin The compiler is only allowed to do these optimizations if it can prove that it doesn't cause problems. If you think of a situation where it causes problems, the compiler is not allowed to do it there. It also seems like you're alluding to type punning, which is Undefined Behavior according to the standard (in which case the compiler can do whatever the hell it wants).
– Max Langhof
15 hours ago
3
@geza It doesn't make much sense to think of compilers "removing" member variables in the first place. They already turn member variables into some machine code representation, and if they can prove that one of those is not needed in the current context, they can eliminate it there. Of course the compiler won't edit your source files or pretend that your source files looked differently or anything of that nature, so it's not "removing the member" in the narrow sense. A better question would be "do unused members always cause overhead?", which can be answered with "no".
– Max Langhof
15 hours ago
|
show 13 more comments
If the observable behaviour of your program doesn't depend on that unused data-member existence1, the compiler is allowed to optimized it away. This is the so called as-if rule2. And compilers became astonishingly good at it. Look
#include <iostream>
struct Foo1
int var1 = 5;
Foo1() std::cout << var1;
;
struct Foo2
int var1 = 5;
int var2;
Foo2() std::cout << var1;
;
void f1()
(void) Foo1;
void f2()
(void) Foo2;
If we ask gcc to compile this translation unit, it outputs:
f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()
f2
is the same as f1
, and no memory is ever used to hold an actual Foo2::var2
. Clang does something similar.
1) Reading the comments, I'm feeling the need to expand a bit on that matter. There are a lot of operations on an object containing an "unused" data member which would have an observable effect on the program. If such an operation is performed or if the compiler cannot prove one is performed, that "unused" data member is part of the observable behaviour of your program. Including, but not limited to:
- taking the size of a type of object (
sizeof(Foo)
), - taking the address of a data member declared after the "unused" one,
- copying the object with a function like
memcpy
, - manipulating the representation of the object (like with
memcmp
), - qualifying an object as volatile,
etc.
2)
The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.
If the observable behaviour of your program doesn't depend on that unused data-member existence1, the compiler is allowed to optimized it away. This is the so called as-if rule2. And compilers became astonishingly good at it. Look
#include <iostream>
struct Foo1
int var1 = 5;
Foo1() std::cout << var1;
;
struct Foo2
int var1 = 5;
int var2;
Foo2() std::cout << var1;
;
void f1()
(void) Foo1;
void f2()
(void) Foo2;
If we ask gcc to compile this translation unit, it outputs:
f1():
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
jmp std::basic_ostream<char, std::char_traits<char> >::operator<<(int)
f2():
jmp f1()
f2
is the same as f1
, and no memory is ever used to hold an actual Foo2::var2
. Clang does something similar.
1) Reading the comments, I'm feeling the need to expand a bit on that matter. There are a lot of operations on an object containing an "unused" data member which would have an observable effect on the program. If such an operation is performed or if the compiler cannot prove one is performed, that "unused" data member is part of the observable behaviour of your program. Including, but not limited to:
- taking the size of a type of object (
sizeof(Foo)
), - taking the address of a data member declared after the "unused" one,
- copying the object with a function like
memcpy
, - manipulating the representation of the object (like with
memcmp
), - qualifying an object as volatile,
etc.
2)
The semantic descriptions in this document define a parameterized nondeterministic abstract machine. This document places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.
edited 14 hours ago
answered 18 hours ago
YSCYSC
24.8k557112
24.8k557112
13
The code you describe is actually pretty trivial, and the compiler has enough visibility to prove that the only behaviour needed is call anoperator<<()
with parameter5
in bothf1()
andf2()
. And it is not only optimising out an unused member of thestruct
s. It is optimising the structs themselves out of existence. Provide a case where it can't optimise out the structs (e.g. create one, and pass its address to a function defined in another source file) and it is a fair bet the unused members will not be optimised out of existence.
– Peter
17 hours ago
1
But doesn't this optimization may lead to invalid memory access? Assume structures that has some zeroes at end of structure in order to make a uniform structure for supporting different data types(likesockaddr
). If it remove that unused part, it may lead to access violation.
– Afshin
17 hours ago
7
@Peter There are basically no structs in generated code anyway. As in, there is no difference in any generated code between two adjacentint
s and astruct X int a, b;
instance. Calling this "optimizing out the struct" is essentially incorrect because this is not an optimization, it's part of transforming stuff to machine code. You may be able to see remnants of your data structures in the generated machine code, but those data structures don't exist there any longer. Also see my answer.
– Max Langhof
15 hours ago
1
@Afshin The compiler is only allowed to do these optimizations if it can prove that it doesn't cause problems. If you think of a situation where it causes problems, the compiler is not allowed to do it there. It also seems like you're alluding to type punning, which is Undefined Behavior according to the standard (in which case the compiler can do whatever the hell it wants).
– Max Langhof
15 hours ago
3
@geza It doesn't make much sense to think of compilers "removing" member variables in the first place. They already turn member variables into some machine code representation, and if they can prove that one of those is not needed in the current context, they can eliminate it there. Of course the compiler won't edit your source files or pretend that your source files looked differently or anything of that nature, so it's not "removing the member" in the narrow sense. A better question would be "do unused members always cause overhead?", which can be answered with "no".
– Max Langhof
15 hours ago
|
show 13 more comments
13
The code you describe is actually pretty trivial, and the compiler has enough visibility to prove that the only behaviour needed is call anoperator<<()
with parameter5
in bothf1()
andf2()
. And it is not only optimising out an unused member of thestruct
s. It is optimising the structs themselves out of existence. Provide a case where it can't optimise out the structs (e.g. create one, and pass its address to a function defined in another source file) and it is a fair bet the unused members will not be optimised out of existence.
– Peter
17 hours ago
1
But doesn't this optimization may lead to invalid memory access? Assume structures that has some zeroes at end of structure in order to make a uniform structure for supporting different data types(likesockaddr
). If it remove that unused part, it may lead to access violation.
– Afshin
17 hours ago
7
@Peter There are basically no structs in generated code anyway. As in, there is no difference in any generated code between two adjacentint
s and astruct X int a, b;
instance. Calling this "optimizing out the struct" is essentially incorrect because this is not an optimization, it's part of transforming stuff to machine code. You may be able to see remnants of your data structures in the generated machine code, but those data structures don't exist there any longer. Also see my answer.
– Max Langhof
15 hours ago
1
@Afshin The compiler is only allowed to do these optimizations if it can prove that it doesn't cause problems. If you think of a situation where it causes problems, the compiler is not allowed to do it there. It also seems like you're alluding to type punning, which is Undefined Behavior according to the standard (in which case the compiler can do whatever the hell it wants).
– Max Langhof
15 hours ago
3
@geza It doesn't make much sense to think of compilers "removing" member variables in the first place. They already turn member variables into some machine code representation, and if they can prove that one of those is not needed in the current context, they can eliminate it there. Of course the compiler won't edit your source files or pretend that your source files looked differently or anything of that nature, so it's not "removing the member" in the narrow sense. A better question would be "do unused members always cause overhead?", which can be answered with "no".
– Max Langhof
15 hours ago
13
13
The code you describe is actually pretty trivial, and the compiler has enough visibility to prove that the only behaviour needed is call an
operator<<()
with parameter 5
in both f1()
and f2()
. And it is not only optimising out an unused member of the struct
s. It is optimising the structs themselves out of existence. Provide a case where it can't optimise out the structs (e.g. create one, and pass its address to a function defined in another source file) and it is a fair bet the unused members will not be optimised out of existence.– Peter
17 hours ago
The code you describe is actually pretty trivial, and the compiler has enough visibility to prove that the only behaviour needed is call an
operator<<()
with parameter 5
in both f1()
and f2()
. And it is not only optimising out an unused member of the struct
s. It is optimising the structs themselves out of existence. Provide a case where it can't optimise out the structs (e.g. create one, and pass its address to a function defined in another source file) and it is a fair bet the unused members will not be optimised out of existence.– Peter
17 hours ago
1
1
But doesn't this optimization may lead to invalid memory access? Assume structures that has some zeroes at end of structure in order to make a uniform structure for supporting different data types(like
sockaddr
). If it remove that unused part, it may lead to access violation.– Afshin
17 hours ago
But doesn't this optimization may lead to invalid memory access? Assume structures that has some zeroes at end of structure in order to make a uniform structure for supporting different data types(like
sockaddr
). If it remove that unused part, it may lead to access violation.– Afshin
17 hours ago
7
7
@Peter There are basically no structs in generated code anyway. As in, there is no difference in any generated code between two adjacent
int
s and a struct X int a, b;
instance. Calling this "optimizing out the struct" is essentially incorrect because this is not an optimization, it's part of transforming stuff to machine code. You may be able to see remnants of your data structures in the generated machine code, but those data structures don't exist there any longer. Also see my answer.– Max Langhof
15 hours ago
@Peter There are basically no structs in generated code anyway. As in, there is no difference in any generated code between two adjacent
int
s and a struct X int a, b;
instance. Calling this "optimizing out the struct" is essentially incorrect because this is not an optimization, it's part of transforming stuff to machine code. You may be able to see remnants of your data structures in the generated machine code, but those data structures don't exist there any longer. Also see my answer.– Max Langhof
15 hours ago
1
1
@Afshin The compiler is only allowed to do these optimizations if it can prove that it doesn't cause problems. If you think of a situation where it causes problems, the compiler is not allowed to do it there. It also seems like you're alluding to type punning, which is Undefined Behavior according to the standard (in which case the compiler can do whatever the hell it wants).
– Max Langhof
15 hours ago
@Afshin The compiler is only allowed to do these optimizations if it can prove that it doesn't cause problems. If you think of a situation where it causes problems, the compiler is not allowed to do it there. It also seems like you're alluding to type punning, which is Undefined Behavior according to the standard (in which case the compiler can do whatever the hell it wants).
– Max Langhof
15 hours ago
3
3
@geza It doesn't make much sense to think of compilers "removing" member variables in the first place. They already turn member variables into some machine code representation, and if they can prove that one of those is not needed in the current context, they can eliminate it there. Of course the compiler won't edit your source files or pretend that your source files looked differently or anything of that nature, so it's not "removing the member" in the narrow sense. A better question would be "do unused members always cause overhead?", which can be answered with "no".
– Max Langhof
15 hours ago
@geza It doesn't make much sense to think of compilers "removing" member variables in the first place. They already turn member variables into some machine code representation, and if they can prove that one of those is not needed in the current context, they can eliminate it there. Of course the compiler won't edit your source files or pretend that your source files looked differently or anything of that nature, so it's not "removing the member" in the narrow sense. A better question would be "do unused members always cause overhead?", which can be answered with "no".
– Max Langhof
15 hours ago
|
show 13 more comments
It's important to realize that the code the compiler produces has no actual knowledge of your data structures (because such a thing doesn't exist on assembly level), and neither does the optimizer. The compiler only produces code for each function, not data structures.
Ok, it also writes constant data sections and such.
Based on that, we can already say that the optimizer won't "remove" or "eliminate" members, because it doesn't output data structures. It outputs code, which may or may not use the members, and among its goals is saving memory or cycles by eliminating pointless uses (i.e. writes/reads) of the members.
The gist of it is that "if the compiler can prove within the scope of a function (including functions that were inlined into it) that the unused member makes no difference for how the function operates (and what it returns) then chances are good that the presence of the member causes no overhead".
As you make the interactions of a function with the outside world more complicated/unclear to the compiler (take/return more complex data structures, e.g. a std::vector<Foo>
, hide the definition of a function in a different compilation unit, forbid/disincentivize inlining etc.), it becomes more and more likely that the compiler cannot prove that the unused member has no effect.
There are no hard rules here because it all depends on the optimizations the compiler makes, but as long as you do trivial things (such as shown in YSC's answer) it's very likely that no overhead will be present, whereas doing complicated things (e.g. returning a std::vector<Foo>
from a function too large for inlining) will probably incur the overhead.
To illustrate the point, consider this example:
struct Foo
int var1 = 3;
int var2 = 4;
int var3 = 5;
;
int test()
Foo foo;
std::array<char, sizeof(Foo)> arr;
std::memcpy(&arr, &foo, sizeof(Foo));
return arr[0] + arr[4];
We do non-trivial things here (take addresses, inspect and add bytes from the byte representation) and yet the optimizer can figure out that the result is always the same on this platform:
test(): # @test()
mov eax, 7
ret
Not only did the members of Foo
not occupy any memory, a Foo
didn't even come into existence! If there are other usages that can't be optimized then e.g. sizeof(Foo)
might matter - but only for that segment of code! If all usages could be optimized like this then the existence of e.g. var3
does not influence the generated code. But even if it is used somewhere else, test()
would remain optimized!
In short: Each usage of Foo
is optimized independently. Some may use more memory because of an unneeded member, some may not. Consult your compiler manual for more details.
add a comment |
It's important to realize that the code the compiler produces has no actual knowledge of your data structures (because such a thing doesn't exist on assembly level), and neither does the optimizer. The compiler only produces code for each function, not data structures.
Ok, it also writes constant data sections and such.
Based on that, we can already say that the optimizer won't "remove" or "eliminate" members, because it doesn't output data structures. It outputs code, which may or may not use the members, and among its goals is saving memory or cycles by eliminating pointless uses (i.e. writes/reads) of the members.
The gist of it is that "if the compiler can prove within the scope of a function (including functions that were inlined into it) that the unused member makes no difference for how the function operates (and what it returns) then chances are good that the presence of the member causes no overhead".
As you make the interactions of a function with the outside world more complicated/unclear to the compiler (take/return more complex data structures, e.g. a std::vector<Foo>
, hide the definition of a function in a different compilation unit, forbid/disincentivize inlining etc.), it becomes more and more likely that the compiler cannot prove that the unused member has no effect.
There are no hard rules here because it all depends on the optimizations the compiler makes, but as long as you do trivial things (such as shown in YSC's answer) it's very likely that no overhead will be present, whereas doing complicated things (e.g. returning a std::vector<Foo>
from a function too large for inlining) will probably incur the overhead.
To illustrate the point, consider this example:
struct Foo
int var1 = 3;
int var2 = 4;
int var3 = 5;
;
int test()
Foo foo;
std::array<char, sizeof(Foo)> arr;
std::memcpy(&arr, &foo, sizeof(Foo));
return arr[0] + arr[4];
We do non-trivial things here (take addresses, inspect and add bytes from the byte representation) and yet the optimizer can figure out that the result is always the same on this platform:
test(): # @test()
mov eax, 7
ret
Not only did the members of Foo
not occupy any memory, a Foo
didn't even come into existence! If there are other usages that can't be optimized then e.g. sizeof(Foo)
might matter - but only for that segment of code! If all usages could be optimized like this then the existence of e.g. var3
does not influence the generated code. But even if it is used somewhere else, test()
would remain optimized!
In short: Each usage of Foo
is optimized independently. Some may use more memory because of an unneeded member, some may not. Consult your compiler manual for more details.
– Max Langhof
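For contrast, here is a minimal sketch (not part of the answer above; the names consume and make_and_pass are invented, and separate compilation at -O2 with GCC or Clang is assumed) of the "complicated" case described earlier, where the struct crosses an out-of-line function boundary and the full layout, unused member included, has to exist:

#include <vector>

struct Foo
{
    int var1 = 3;
    int var2 = 4; // never read in this translation unit
    int var3 = 5;
};

// Defined in some other .cpp file; the compiler cannot see its body here.
void consume(const std::vector<Foo>& v);

void make_and_pass()
{
    std::vector<Foo> v(1000); // allocates 1000 * sizeof(Foo) bytes,
    consume(v);               // including the space for var2 in every element
}

Because consume() is opaque to this translation unit, the compiler cannot prove that var2 is unobserved, so every element keeps its full size.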
The compiler will only optimise away an unused member variable (especially a public one) if it can prove that removing the variable has no side effects and that no part of the program depends on the size of Foo being the same.
I don't think any current compiler performs such optimisations unless the structure isn't really being used at all. Some compilers may at least warn about unused private variables, but not usually for public ones.
– Alan Birtles
And yet it does: godbolt.org/z/UJKguS + no compiler would warn for an unused data member. – YSC
@YSC clang++ does warn about unused data members and variables. – Maxim Egorushkin
@YSC I think that's a slightly different situation; it's optimised the structure away completely and just prints 5 directly. – Alan Birtles
@AlanBirtles I don't see how it is different. The compiler optimized everything from the object that has no effect on the observable behavior of the program. So your first sentence "the compiler is very unlikely to optimize away an unused member variable" is wrong. – YSC
@YSC in real code, where the structure is actually being used rather than just constructed for its side effects, it's probably more unlikely it would be optimised away. – Alan Birtles
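To make the "no part of the program depends on the size of Foo being the same" condition from this answer concrete, here is a sketch (mine, not from the answer; write_record is an invented name) of code whose observable behaviour depends on sizeof(Foo), so the layout cannot silently shrink:

#include <cstdio>

struct Foo
{
    int var1 = 5;
    int var2 = 0; // "unused", but it still contributes to the record size
};

// Writes fixed-size records of exactly sizeof(Foo) bytes each.
// Silently dropping var2 would change the on-disk format.
bool write_record(std::FILE* f, const Foo& foo)
{
    return std::fwrite(&foo, sizeof(Foo), 1, f) == 1;
}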
In general, you have to assume that you get what you have asked for, i.e., the "unused" member variables are there.
Since in your example both members are public, the compiler cannot know if some code (particularly from other translation units = other *.cpp files, which are compiled separately and then linked) would access the "unused" member.
The answer of YSC gives a very simple example, where the class type is only used as a variable of automatic storage duration and where no pointer to that variable is taken. There, the compiler can inline all the code and can then eliminate all the dead code.
If you have interfaces between functions defined in different translation units, the compiler typically does not know anything. The interfaces typically follow some predefined ABI, such that different object files can be linked together without any problems. Typically ABIs do not make a difference depending on whether a member is used or not. So, in such cases the second member has to be physically in memory (unless eliminated later by the linker).
And as long as you are within the boundaries of the language, you cannot observe that any elimination happens. If you call sizeof(Foo), you will get 2*sizeof(int). If you create an array of Foos, the distance between the beginnings of two consecutive Foo objects is always sizeof(Foo) bytes.
Your type is a standard-layout type, which means that you can also access members based on compile-time computed offsets (cf. the offsetof macro). Moreover, you can inspect the byte-by-byte representation of the object by copying it onto an array of char using std::memcpy. In all these cases, the second member can be observed to be there.
– Handy999
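A short sketch (not part of the answer) of the observations just described, assuming a typical platform with 4-byte int:

#include <cstddef> // offsetof
#include <cstdio>

struct Foo
{
    int var1;
    int var2; // never used, but still part of the object representation
};

int main()
{
    std::printf("sizeof(Foo)         = %zu\n", sizeof(Foo));         // typically 8
    std::printf("offsetof(Foo, var2) = %zu\n", offsetof(Foo, var2)); // typically 4

    Foo arr[4] = {};
    // Consecutive array elements are exactly sizeof(Foo) bytes apart.
    std::printf("stride              = %td\n",
                reinterpret_cast<const char*>(&arr[1]) -
                reinterpret_cast<const char*>(&arr[0]));
}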
+1: only aggressive whole-program optimization could possibly adjust data layout (including compile-time sizes and offsets) for cases where a local struct object isn't optimized away entirely. gcc -fwhole-program -O3 *.c could in theory do it, but in practice probably won't (e.g. in case the program makes some assumptions about what exact value sizeof() has on this target, and because it's a really complicated optimization that programmers should do by hand if they want it). – Peter Cordes
It depends on your compiler and its optimization level.
In gcc, if you specify -O, it will turn on the following optimization flags (among others):
-fauto-inc-dec
-fbranch-count-reg
-fcombine-stack-adjustments
-fcompare-elim
-fcprop-registers
-fdce
-fdefer-pop
...
-fdce stands for Dead Code Elimination.
You can use __attribute__((used)) to prevent gcc eliminating an unused variable with static storage:
This attribute, attached to a variable with static storage, means that the variable must be emitted even if it appears that the variable is not referenced.
When applied to a static data member of a C++ class template, the attribute also means that the member is instantiated if the class itself is instantiated.
– wonter
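For illustration, a minimal sketch (not from the answer; the variable name is invented) of the attribute applied to an otherwise-unreferenced variable with static storage duration:

// Without the attribute an optimizing compiler may drop this internal-linkage
// constant entirely; with it, the symbol is emitted into the object file even
// though nothing in the program references it (GCC/Clang extension).
__attribute__((used)) static const char build_stamp[] = "example build stamp";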
That's for static data members, not unused per-instance members (which don't get optimized away unless the whole object does). But yes, I guess that does count. BTW, eliminating unused static variables isn't dead code elimination, unless GCC bends the term. – Peter Cordes
The examples provided by other answers to this question which elide var2 are based on a single optimization technique: constant propagation, and subsequent elision of the whole structure (not the elision of just var2). This is the simple case, and optimizing compilers do implement it.
For unmanaged C/C++ code the answer is that the compiler will in general not elide var2. As far as I know there is no support for such a C/C++ struct transformation in debugging information, and if the struct is accessible as a variable in a debugger then var2 cannot be elided. As far as I know no current C/C++ compiler can specialize functions according to elision of var2, so if the struct is passed to or returned from a non-inlined function then var2 cannot be elided.
For managed languages such as C#/Java with a JIT compiler, the compiler might be able to safely elide var2 because it can precisely track whether it is being used and whether it escapes to unmanaged code. The physical size of the struct in managed languages can differ from the size reported to the programmer.
Year-2019 C/C++ compilers cannot elide var2 from the struct unless the whole struct variable is elided. For interesting cases of elision of var2 from the struct, the answer is: no.
Some future C/C++ compilers may be able to elide var2 from the struct, and the ecosystem built around the compilers will need to adapt to process elision information generated by compilers.
– atomsymbol
There is a metric ton of low-level driver code out there that specifically adds do-nothing struct members for padding to match hardware data frame sizes and as a hack to get desired memory alignment. If a compiler started optimizing these out there would be much breakage. – Andy Brown
@Andy they're not really do-nothing, as the address of the following data members is evaluated. This means the existence of those padding members does have an observable effect on the program. Here, var2 doesn't. – YSC
I would be surprised if the compiler could optimize it away, given that any compilation unit addressing such a struct might get linked to another compilation unit using the same struct, and the compiler can't know if the separate compilation unit addresses the member or not. – Galik
@geza sizeof(Foo) cannot decrease by definition - if you print sizeof(Foo) it must yield 8 (on common platforms). Compilers can optimize away the space used by var2 (no matter if through new or on the stack or in function calls...) in any context they find it reasonable, even without LTO or whole-program optimization. Where that is not possible they won't do it, as with just about any other optimization. I believe the edit to the accepted answer makes it significantly less likely to be misled by it. – Max Langhof
@geza I think it all depends on what style of programming is used. In template-heavy code, you have so many tiny functions and tiny structs which only exist to be used once, that the trivial cases are pretty much everywhere. – hegel5000