Scientific Computing - Debugging Tools

Greetings!

As a PhD student, I usually write programs in the domains of physical simulations, numerical algorithms, or data science methods.
Examples of code snippets I deal with can be found in my previous threads
(e.g. http://www.cplusplus.com/forum/beginner/283748/ )

It usually goes like this:
Spend some time coding.
Spend some time debugging.

The problem is that my debugging takes too much time, in particular debugging of logic errors, i.e. when my simulation executes but the results are "physically" impossible.
I debug the primitive way, by going through the code again and again, checking whether everything makes sense, and by printing out values of variables and so on.

I am aware that there are lots of powerful debugging tools. I want to ask for advice on the right tool for me, given that a) I am still very much a C++ novice, b) my programs are not that large (they can be coded up in a day or two), c) my domain is scientific computing.

edit:
I am on Linux (Ubuntu).

Any obvious hints?

Best,
PiF
Without knowing what platform you're on, it's impossible to recommend specific debuggers.

If you're on Windows, IMO Visual Studio's debugger is one of the best available.

A suggestion for reducing debugging time: write a small snippet, test the snippet, and don't continue coding until you're sure the snippet is right. You should find this cuts down your debugging time substantially.

Thanks AbstractionAnon.
I have edited my post to name the platform.

Do you use an IDE? If so, there is probably a debugger inside it.

I debug the primitive way, by going through the code again and again, checking whether everything makes sense, and by printing out values of variables and so on.


Yes, that is definitely "Poor man's debugging": one can waste a lot of time doing that.

With a proper debugger, one can create a watchlist of variables, step through the code, see how the values change, and deduce what went wrong.

Eventually one learns to do defensive programming: check things before they become a problem, or use some other means to make sure that it will never be a problem. One trivial example is to check that denominators are not zero, or close enough to zero to be a problem. Another example is null pointers. And the Elephant in the room is validating your input data. Edit: It may be possible to prove mathematically that an algorithm will work provided that the input is valid.
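For example, a minimal sketch of such checks (the helper names and the tolerance are made up purely for illustration):

#include <cmath>
#include <stdexcept>

// hypothetical guard against (near-)zero denominators
double safe_divide(double numerator, double denominator, double eps = 1e-12) {
    if (std::abs(denominator) < eps)
        throw std::runtime_error("safe_divide: denominator too close to zero");
    return numerator / denominator;
}

// hypothetical input validation: a standard deviation must be positive
void check_sigma(double sigma) {
    if (!(sigma > 0.0))
        throw std::invalid_argument("sigma must be positive");
}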

Good Luck !!!!!

Edit:

In terms of whether any debugger is better than another, I think any of them will do. If using the shell to compile, then the only ones I know of are gdb with g++ and lldb with clang++. All of the IDE's I have used wrap those debuggers in a GUI.
My opinion is that GDB is the best choice for Linux. There is also LLDB, but it is less mature and not as capable. These debuggers are primarily command line tools, but there are lots of interfaces that "wrap" them into a GUI.

Besides GDB and LLDB, there aren't many other decent options.

If you do not use a capable IDE, GDB does provide an integrated text-based user interface (the specific command-line option is -tui). This is useful if your editor's debugger integration doesn't exist or it's bad, because you will want to see multiple streams of information in a way that a standard console program can't easily display.
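To make that concrete, a first GDB session might look roughly like this (assuming the program was compiled with -g so debug information is available; the file and function names are just taken from your earlier thread for illustration):

g++ -g -Og -o sim code.cpp
gdb -tui ./sim
(gdb) break U_pot_GM        # stop whenever this function is entered
(gdb) run
(gdb) next                  # execute one source line, stepping over calls
(gdb) print theta.mu1       # inspect a variable
(gdb) watch U               # break whenever U changes
(gdb) continue
(gdb) backtrace             # show the call stack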

Most importantly, try to sit next to an expert who's using a debugger to solve real problems. This is so important because the debugger is almost useless if you don't apply it to a problem effectively - and there's no better way to learn than by watching a pro.
No tools, but some thoughts...
break it down. Have you checked each function by itself, with some known in and outs? Do you have any kind of known input / output scenario where you have the 'right' answer end to end?
You can't just debug it until it comes out 'sorta kinda like expected'. That does not prove it's debugged, it just proves you got it closer or that you got lucky and 2 mistakes neutralized each other and gave OKish output. With the kind of work you do, even saying that 'this is wrong' could be iffy. Results that don't match expectations may or may not be wrong, in other words -- could be errors in the input data, could be your expectations were not right, whatever.
You have to have some way to say yes, absolutely, its good or bad.
One thing you can do is lock in your random numbers. Seed it with the same seed and inputs until you get it debugged. And if humanly possible, make a small scenario you can get an end-to-end result for, given the fixed (for now) random stream you are using -- leave nothing to chance and run the math on paper (or in a scripted math tool like matlab or maple or something).
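A minimal sketch of locking down the random stream while debugging (the seed value is arbitrary; 'twister' is just the engine name used elsewhere in this thread):

#include <random>

std::mt19937 twister(12345);   // fixed seed while debugging -> every run sees the same random numbers
// std::mt19937 twister(std::random_device{}());   // switch back to this once the code is verified

double draw_noise() {
    static std::normal_distribution<double> gauss(0.0, 1.0);
    return gauss(twister);
}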
> b) my programs are not that large (they can be coded up in a day or two)
And how much time do you spend debugging?

A day coding is borderline where you should be spending some time up front on a design.
It doesn't have to be anything too heavy. An hour spent sketching out the major functions and data structures will be worth it.

Also, test as you write. As you complete a function, write a test for it.
A single function test will be a lot easier to debug for two reasons:
- it's a smaller amount of code
- it's fresh in your mind

Anything you can do to reduce the time between when the bug was introduced and when you find it is worth doing.
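A minimal sketch of such a single-function test, using a standard normal density as a stand-in for something like your likelihood_GM (the reference value at x = 0 is 1/sqrt(2*pi), which is easy to check on paper):

#include <cassert>
#include <cmath>

constexpr double PI = 3.14159265358979323846;

// function under test (stand-in for a real routine such as likelihood_GM)
double std_normal_pdf(double x) {
    return std::exp(-0.5 * x * x) / std::sqrt(2.0 * PI);
}

int main() {
    // hand-computed reference: 1/sqrt(2*pi) = 0.3989422804...
    assert(std::abs(std_normal_pdf(0.0) - 0.3989422804014327) < 1e-12);
}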


It's also worth learning some actual C++, like classes.

For example, if you start cascading a bunch of parameters from one function to another like in the thread you quoted, it's time to start encapsulating those common parameters into a class.
Eg.
#include <cmath>     // exp, log, sqrt
#include <vector>
using std::vector; using std::exp; using std::log; using std::sqrt;

constexpr double PI = 3.14159265358979323846;

struct params { double mu1 {}, mu2 {}; };   // assumed definition from the earlier thread: the two Gaussian means

class Calc {
    params theta;
    double sig0, sig1, sig2;
    double a1, a2;
  public:
    Calc(const params &theta, double sig0, double sig1, double sig2, double a1, double a2)
        : theta{theta}, sig0{sig0}, sig1{sig1}, sig2{sig2}, a1{a1}, a2{a2}
        {}
    double U_pot_GM(const vector<double>& Xdata);
  private:
    double likelihood_GM(double x);
};

double Calc::U_pot_GM(const vector<double>& Xdata) {
    double U = 0;
    for(size_t i=0; i<Xdata.size(); ++i) {      // size_t avoids a signed/unsigned comparison warning
        U += log( likelihood_GM(Xdata[i]) );    // sum over log-likelihoods
    }
    U -= (theta.mu1*theta.mu1 + theta.mu2*theta.mu2)/(2*sig0*sig0) + log(2*PI*sig0*sig0);  // log-prior part.
    return -1*U;
}

double Calc::likelihood_GM(const double x) {
    double e1 = exp( -1*(x-theta.mu1)*(x-theta.mu1)/(2*sig1*sig1) );
    double e2 = exp( -1*(x-theta.mu2)*(x-theta.mu2)/(2*sig2*sig2) );
    double p = a1/(sqrt(2*PI)*sig1) * e1 + a2/(sqrt(2*PI)*sig2) * e2;
    return p;
}

// TODO:
// Add get_noisy_force_GM as well.


// external user of the class
double doit ( const vector<double>& Xdata ) {
    params parm;
    Calc calc(parm, 0, 1, 2, 3, 4);   // example parameter values only
    return calc.U_pot_GM(Xdata);
}


This is actually more efficient for the code, since you're not spending time pushing half a dozen parameters from one function to the next.
https://www.learncpp.com/cpp-tutorial/welcome-to-object-oriented-programming/
One of the most important points I can suggest in coding is to initialise variables when defined. It is common to initialise variables to their default value, but for those that are supposed to be set later it is perhaps better to initialise them to an unexpected, easily recognised value. That way it's easy to see when/if variables are properly initialised. For float/double etc., perhaps initialise them to NaN.
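A minimal sketch of the NaN idea (any arithmetic involving the sentinel stays NaN, so an un-set variable is easy to spot in the output):

#include <limits>

double energy = std::numeric_limits<double>::quiet_NaN();   // "not set yet" sentinel
// ... if the program later prints nan, this variable was never assigned a real value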

There is a famous old story (sorry, I no longer have the link) about a PhD student who wrote a large Fortran application, coded across several sub-programs, that undertook some very sophisticated data analysis. He was very near the end of completing his thesis and everything was going great. Then he came upon a problem with some Fortran code that he didn't understand. He contacted a Fortran support group; they looked at the code and delivered the killer blow: his code was analysing data passed via COMMON that had never been initialised! His whole thesis had been based upon analysing random memory data! He had no thesis and had to withdraw from the PhD after over 3 years of work!

PS. Also, never check for equality if one of the numbers is a floating-point value (float or double). Check that the absolute difference between them is less than a specified tolerance (usually called epsilon - which depends upon the context).
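Something like the following (the tolerance is context-dependent and only illustrative):

#include <cmath>

// true if a and b agree to within eps
bool nearly_equal(double a, double b, double eps = 1e-9) {
    return std::abs(a - b) < eps;
}

// use:  if (nearly_equal(x, 1.0)) ...   instead of   if (x == 1.0) ...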
Hi guys,

thank you all for your many comments!

To the tools:
TheIdeasMan wrote:

In terms of whether any debugger is better than another, I think any of them will do. If using the shell to compile, then the only ones I know of are gdb with g++ and lldb with clang++. All of the IDE's I have used wrap those debuggers in a GUI.
mbozzi wrote:

My opinion is that GDB is the best choice for Linux. There is also LLDB, but it is less mature and not as capable. These debuggers are primarily command line tools, but there are lots of interfaces that "wrap" them into a GUI.

Besides GDB and LLDB, there aren't many other decent options.

If you do not use a capable IDE, GDB does provide an integrated text-based user interface (the specific command-line option is -tui). This is useful if your editor's debugger integration doesn't exist or it's bad, because you will want to see multiple streams of information in a way that a standard console program can't easily display.
I am currently not using an IDE. When I first learned C++, I used Code::Blocks. But I did not like the fact that it created so many project files (maybe I was not using it correctly lol). Also, I wanted to get more comfortable with using the terminal (as I frequently need to run my codes on a remote server that I can only access via the terminal).
Maybe I should start using an IDE for debugging purposes only. I assume it is easier to use than terminal-based ones?

TheIdeasMan wrote:

Eventually one learns to do defensive programming: check things before they become a problem, or use some other means to make sure that it will never be a problem. One trivial example is to check that denominators are not zero, or close enough to zero to be a problem. Another example is null pointers. And the Elephant in the room is validating your input data. Edit: It may be possible to prove mathematically that an algorithm will work provided that the input is valid.
Yes, I already try to think defensively and always ask myself "what could go wrong here". But I don't do enough testing...

jonnin wrote:

Have you checked each function by itself, with some known in and outs? Do you have any kind of known input / output scenario where you have the 'right' answer end to end?
You can't just debug it until it comes out 'sorta kinda like expected'. That does not prove it's debugged, it just proves you got it closer or that you got lucky and 2 mistakes neutralized each other and gave OKish output. With the kind of work you do, even saying that 'this is wrong' could be iffy. Results that don't match expectations may or may not be wrong, in other words -- could be errors in the input data, could be your expectations were not right, whatever.
You have to have some way to say yes, absolutely, its good or bad.
Yes, you are absolutely right... I barely do testing of the individual modules. I usually test the whole code for simpler settings (where I know the output) in order to debug. But this is not enough, it still takes me ages to find the problems. I need to start doing more testing of the individual components such as the force evaluation. But for those components, I often find it challenging to find input where I already know the output. It's ironically easier when considering the simulation as a whole.

@salem c
Thanks for the specific recommendation of using classes for my types of code. I am still expanding my C++ knowledge on the side and I already have some basics of OOP. But I currently don't feel confident enough in it to apply it in my work as I am scared of the possible bugs that might take me even longer to spot since I am less used to this paradigm.

seeplus wrote:

One of the most important points I can suggest in coding is to initialise variables when defined. It is common to initialise variables to their default value, but for those that are supposed to be set later it is perhaps better to initialise them to an unexpected, easily recognised value. That way it's easy to see when/if variables are properly initialised. For float/double etc., perhaps initialise them to NaN.
Interesting idea with the NaNs. Although seeing NaNs when printing out my variables has always been a clear indicator to me that something is going terribly wrong, so I don't quite like the thought of introducing them myself.

There is a famous old story (sorry, I no longer have the link) about a PhD student who wrote a large Fortran application, coded across several sub-programs, that undertook some very sophisticated data analysis. He was very near the end of completing his thesis and everything was going great. Then he came upon a problem with some Fortran code that he didn't understand. He contacted a Fortran support group; they looked at the code and delivered the killer blow: his code was analysing data passed via COMMON that had never been initialised! His whole thesis had been based upon analysing random memory data! He had no thesis and had to withdraw from the PhD after over 3 years of work!
This is my personal nightmare as well xD
Last year I worked through the master's thesis of a former student of my supervisor. He was using Python and intended to train three neural networks independently of each other. He put them in a Python list like
my_net = Net()
network_list = [my_net] * 3   # three references to the same Net object; [Net() for _ in range(3)] would give independent copies

and thought this would give him a list of three independent copies when, in actuality, it gives him a list of three references to the same object. All his plots in the thesis turned out to be rubbish. He is long gone now, working for a company... my supervisor never told him.
Debugging is so, so important...
With initialising variables: ideally, wait until you have a sensible value, then declare and initialise all in one statement. Otherwise initialise to NaN or an unexpected value, as seeplus was saying.

Compiler options are very important too. It's worth reading the manual for g++, there are a zillion options, but there are some useful ones that are not set by -Wall or -Wextra. The compiler can warn about uninitialised variables and many other things. In this way the compiler is your friend: any warnings are clues that you probably have something that will at least produce wrong results, or not work at all.

Actually, what options are you using to compile?


PhysicsIsFun wrote:

But I did not like the fact that it created so many project files (maybe I was not using it correctly lol).


Just wait until you start using cmake - it creates whole directories worth of files, even for a simple project. And using cmake is a good thing IMO. So if Code Blocks makes a bunch of files, don't worry about them. Although I found it useful to turn off pre-compiled header files, I once had a 50GB pch file - it seemed to have compiled the whole library I was using!!!!

I have used a variety of IDEs; there are pros and cons. Using the IDE is easier as a beginner, but sometimes learning the IDE itself takes some effort too. Using the shell is much harder, but worth it in the end. One doesn't get far without having to learn tools such as make or cmake, which are like another programming language. The IDEs can incorporate make and cmake too; it's easy to edit a file to put one's options into.

So I have used KDevelop, Code Blocks, QTCreator, Eclipse and now Visual Studio Code. All of them were easy to use and setup. From a Gnu/Linux point of view, one can install them as flatpaks. I didn't like C::B because it doesn't have background compilation or version control.

At the moment I use VS Code with cmake, I am happy with it.

So give one of the IDEs a tryout, as well as using the shell, and see how you get on.
Re your previous post https://cplusplus.com/forum/beginner/283748/

I would have coded these routines something like (assuming the maths/physics logic is correct!):

// (assumes the params and forces structs, the random engine 'twister', and <vector>/<cmath>/<random> from the earlier thread are in scope)
// This is the energy routine, it needs to process the whole vector Xdata
double U_pot_GM(const params& theta, const vector <double>& Xdata, const double two_sigsig1, const double two_sigsig2,
	const double pref_exp1, const double pref_exp2,
	const double two_sigsig0, const double log_PI_two_sigsig0) {

	double U {};

	for (size_t i {}; i < Xdata.size(); ++i) {    //.size() returns type size_t which is unsigned. int is signed
		const auto& x { Xdata[i] };
		const auto x_minus_mu1 { x - theta.mu1 };
		const auto x_minus_mu2 { x - theta.mu2 };
		const auto e1 { exp(-(x_minus_mu1) * (x_minus_mu1) / (two_sigsig1)) };
		const auto e2 { exp(-(x_minus_mu2) * (x_minus_mu2) / (two_sigsig2)) };

		U += log(pref_exp1 * e1 + pref_exp2 * e2);
	}

	U -= log_PI_two_sigsig0 + (theta.mu1 * theta.mu1 + theta.mu2 * theta.mu2) / two_sigsig0;

	return -U;
}

// This is the force routine, it needs to process B random elements of the vector Xdata
forces get_noisy_force_GM(const params& theta, const vector <double>& Xdata, const size_t B, vector <int>& idx_arr, const double two_sigsig1,
	    const double two_sigsig2, const double pref_exp1, const double pref_exp2,
	    const double F_scale_1, const double F_scale_2, const double sigsig0) {

	forces F { 0, 0 };
	const auto size_minus_B { Xdata.size() - B };
	const auto scale { Xdata.size() / (B + 0.0) };

	if (Xdata.size() != B)
		for (size_t i { Xdata.size() - 1 }; i >= size_minus_B; --i) {
			const uniform_int_distribution<size_t> distrib(0, i);

			swap(idx_arr[i], idx_arr[distrib(twister)]);
		}

	for (size_t k {}; k < B; ++k) {		// visit the last B shuffled indices; counting up avoids unsigned wrap-around when B == Xdata.size()
		const auto& x { Xdata[idx_arr[idx_arr.size() - 1 - k]] };
		const auto x_minus_mu1 { x - theta.mu1 };
		const auto x_minus_mu2 { x - theta.mu2 };
		const auto e1 { exp(-(x_minus_mu1) * (x_minus_mu1) / (two_sigsig1)) };
		const auto e2 { exp(-(x_minus_mu2) * (x_minus_mu2) / (two_sigsig2)) };
		const auto likeli_inv { 1 / (pref_exp1 * e1 + pref_exp2 * e2) };

		F.fmu1 += likeli_inv * e1 * (x_minus_mu1);
		F.fmu2 += likeli_inv * e2 * (x_minus_mu2);
	}

	F.fmu1 *= F_scale_1 * scale;
	F.fmu2 *= F_scale_2 * scale;

	F.fmu1 -= theta.mu1 / (sigsig0);
	F.fmu2 -= theta.mu2 / (sigsig0);

	return F;
}


This now compiles cleanly (on VS2022) without warnings. Another piece of advice is to never ignore warnings!
TheIdeasMan wrote:

Compiler options are very important too. It's worth reading the manual for g++, there are a zillion options, but there are some useful ones that are not set by -Wall or -Wextra. The compiler can warn about uninitialised variables and many other things. In this way the compiler is your friend: any warnings are clues that you probably have something that will at least produce wrong results, or not work at all.

Actually, what options are you using to compile?
I currently simply use the optimizer flag, i.e. I compile with
g++ -O3 -o code.exe code.cpp
Which flags would you recommend for my purposes?


I have used a variety of IDEs; there are pros and cons. Using the IDE is easier as a beginner, but sometimes learning the IDE itself takes some effort too. Using the shell is much harder, but worth it in the end. One doesn't get far without having to learn tools such as make or cmake, which are like another programming language. The IDEs can incorporate make and cmake too; it's easy to edit a file to put one's options into.

So I have used KDevelop, Code Blocks, QTCreator, Eclipse and now Visual Studio Code. All of them were easy to use and setup. From a Gnu/Linux point of view, one can install them as flatpaks. I didn't like C::B because it doesn't have background compilation or version control.

At the moment I use VS Code with cmake, I am happy with it.


Yes, I will try out an IDE. Currently Googling for the right choice. Ideally, it would be user-friendly and come not only with a debugger but also with some kind of profiler that allows me to measure the runtime spent in certain functions (see again my previous threads where this was a big topic).

@seeplus
Thanks for the recommended optimization. Although I am a bit confused: It looks like you are creating variables inside the loop. I always thought it is more efficient to create them outside the loop and only change their value inside.
Unless you're counting every CPU cycle, it's best to define and initialise variables as/when they are required. For PODs, unless you're doing multi-billion definitions, it probably won't be noticeable.

Consider:

#include <chrono>
#include <iostream>

template<class TimeUnit = std::chrono::milliseconds>
class Timer {
public:
	Timer() {
		m_start = std::chrono::steady_clock::now();
	}
	~Timer() {
		std::chrono::steady_clock::time_point stop = std::chrono::steady_clock::now();
		std::cout << "** Running time: " << std::chrono::duration_cast<TimeUnit>(stop - m_start).count() << '\n';
	}
private:
	std::chrono::steady_clock::time_point m_start;
};

int main() {
	constexpr size_t loops { 150'000'000 };

	{
		Timer<std::chrono::microseconds> t;

		size_t a {};
		size_t tot {};

		for (size_t i {}; i < loops; ++i) {
			a = i + 2;
			tot += a;
		}

		std::cout << tot << '\n';
	}

	{
		Timer<std::chrono::microseconds> t;

		size_t tot {};

		for (size_t i {}; i < loops; ++i) {
			const auto a { i + 2 };
			tot += a;
		}

		std::cout << tot << '\n';
	}
}


On my system I get:


11250000225000000
** Running time: 32029
11250000225000000
** Running time: 31947


which shows that defining the variable within the loop is slightly quicker! (All the usual caveats for such simple timing tests).

It may make a difference if the defined variable has an 'expensive' constructor.
Cheers @seeplus!

which shows that defining the variable within the loop is slightly quicker
Is there an intuitive explanation for this?

So defining primitive variables like doubles inside a loop is better not only in terms of possible bugs but maybe even faster.

Would vectors/arrays already count as having "complex" constructors? Certainly it can't be good to recreate a large vector again and again...
Well from this I'd say for PODs there's nothing in it (but others may offer a detailed explanation...)

For vectors etc., which aren't PODs, as I said it depends upon the cost of the constructor. If you're counting CPU cycles for absolute speed, then do some tests. Unless shown otherwise for specific reasons, it's good practice to always define and initialise when required. Once you have the program working, if it appears to be slow, then things like this can be tried one at a time and only changed (with a comment) if the timings really improve.
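If a large vector really is needed inside a loop, a common pattern is to construct it once outside and reuse its storage (a sketch with made-up names, not a measured claim):

#include <cstddef>
#include <vector>

void process_blocks(std::size_t n_blocks, std::size_t block_size) {
    std::vector<double> buffer;      // constructed once; its capacity is reused below
    buffer.reserve(block_size);

    for (std::size_t b = 0; b < n_blocks; ++b) {
        buffer.clear();              // empties the vector but keeps the allocated capacity
        for (std::size_t i = 0; i < block_size; ++i)
            buffer.push_back(static_cast<double>(i + b));
        // ... use buffer ...
    }
}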
which shows that defining the variable within the loop is slightly quicker
Is there an intuitive explanation for this?


Could it be because of the const?
I am currently not using an IDE. When I first learned C++, I used Code::Blocks. But I did not like the fact that it created so many project files (maybe I was not using it correctly lol). Also, I wanted to get more comfortable with using the terminal (as I frequently need to run my codes on a remote server that I can only access via the terminal).

I don't use an IDE either, even though I am currently working under Windows.

Maybe I should start using an IDE for debugging purposes only. I assume it is easier to use than terminal-based ones?
IMO both interfaces have advantages and disadvantages. I think it's valuable to have an interactive UI (not just a REPL) available for reasons discussed earlier but I wouldn't discourage using the REPL. Try each and see which works for you. You can always switch between them.

Which flags would you recommend for my purposes?

The compiler can help you catch some of your mistakes, if you ask it to help you.

At the very minimum, always enable compiler warnings with -Wall -Wextra -pedantic and (if practical) compile your code against a particular C++ standard. For example to compile against C++17 use -std=c++17 in addition to the flags mentioned above.

Note that you may want to use different flags while debugging your program. At the very least you will want to include debugging information into the compiled code, but in addition you might want to disable certain optimizations that interfere with a debugger, or enable certain compiler features that can detect problems but also incur a performance cost.

The minimum set of options for debugging could be something like
g++ -Wall -Wextra -pedantic -std=c++17 -g -Og -D_GLIBCXX_DEBUG=1 -ocode code.cpp

Additionally, you should pick up a second compiler, so you can compile your code with it and see if it produces any diagnostic messages. Often if one compiler doesn't catch a mistake or its output doesn't make sense, the other will have no problems identifying the problem.
Clang would be a good choice to complement GCC.

Also of potential interest is GCC's static analyzer (the option is -fanalyzer), as well as the sanitizers available in both GCC and Clang, which affect codegen to catch certain problems when they occur; that option is -fsanitize. There is also an option to detect signed integer overflow, -ftrapv, and countless others that might be useful.
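For instance, a debug build with the address and undefined-behaviour sanitizers enabled could look like this (the same flags also work with clang++):
g++ -std=c++17 -Wall -Wextra -pedantic -g -Og -fsanitize=address,undefined -o code code.cpp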
pick up a second compiler, so you can compile your code with it and see if it produces any diagnostic messages.

Advice I had to be beaten over the head with: using 2 or more different compilers is cheap but effective for finding some bugs. :)
Here are some of the warnings not enabled by -Wall, -Wextra or -pedantic, but still useful.

https://gcc.gnu.org/onlinedocs/gcc-12.1.0/gcc/Warning-Options.html#Warning-Options

-Werror turns all warnings into errors, forcing one to fix them.

-pedantic-errors

Give an error whenever the base standard (see -Wpedantic) requires a diagnostic, in some cases where there is undefined behavior at compile-time and in some other cases that do not prevent compilation of programs that are valid according to the standard. This is not equivalent to -Werror=pedantic, since there are errors enabled by this option and not enabled by the latter and vice versa.

-Wswitch-default

Warn whenever a switch statement does not have a default case.
-Wswitch-enum

Warn whenever a switch statement has an index of enumerated type and lacks a case for one or more of the named codes of that enumeration. case labels outside the enumeration range also provoke warnings when this option is used. The only difference between -Wswitch and this option is that this option gives a warning about an omitted enumeration code even if there is a default label.

-Wuninitialized

Warn if an object with automatic or allocated storage duration is used without having been initialized. In C++, also warn if a non-static reference or non-static const member appears in a class without constructors.

In addition, passing a pointer (or in C++, a reference) to an uninitialized object to a const-qualified argument of a built-in function known to read the object is also diagnosed by this warning. (-Wmaybe-uninitialized is issued for ordinary functions.)

If you want to warn about code that uses the uninitialized value of the variable in its own initializer, use the -Winit-self option.

These warnings occur for individual uninitialized elements of structure, union or array variables as well as for variables that are uninitialized as a whole. They do not occur for variables or elements declared volatile. Because these warnings depend on optimization, the exact variables or elements for which there are warnings depend on the precise optimization options and version of GCC used.

Note that there may be no warning about a variable that is used only to compute a value that itself is never used, because such computations may be deleted by data flow analysis before the warnings are printed.

In C++, this warning also warns about using uninitialized objects in member-initializer-lists. For example, GCC warns about b being uninitialized in the following snippet:

struct A {
int a;
int b;
A() : a(b) { }
};

-Wfloat-equal

Warn if floating-point values are used in equality comparisons.

The idea behind this is that sometimes it is convenient (for the programmer) to consider floating-point values as approximations to infinitely precise real numbers. If you are doing this, then you need to compute (by analyzing the code, or in some other way) the maximum or likely maximum error that the computation introduces, and allow for it when performing comparisons (and when producing output, but that’s a different problem). In particular, instead of testing for equality, you should check to see whether the two values have ranges that overlap; and this is done with the relational operators, so equality comparisons are probably mistaken.

-Wshadow

Warn whenever a local variable or type declaration shadows another variable, parameter, type, class member (in C++), or instance variable (in Objective-C) or whenever a built-in function is shadowed. Note that in C++, the compiler warns if a local variable shadows an explicit typedef, but not if it shadows a struct/class/enum. If this warning is enabled, it includes also all instances of local shadowing. This means that -Wno-shadow=local and -Wno-shadow=compatible-local are ignored when -Wshadow is used. Same as -Wshadow=global.

-Wcast-qual

Warn whenever a pointer is cast so as to remove a type qualifier from the target type. For example, warn if a const char * is cast to an ordinary char *.

Also warn when making a cast that introduces a type qualifier in an unsafe way. For example, casting char ** to const char ** is unsafe.

-Wconversion

Warn for implicit conversions that may alter a value. This includes conversions between real and integer, like abs (x) when x is double; conversions between signed and unsigned, like unsigned ui = -1; and conversions to smaller types, like sqrtf (M_PI). Do not warn for explicit casts like abs ((int) x) and ui = (unsigned) -1, or if the value is not changed by the conversion like in abs (2.0). Warnings about conversions between signed and unsigned integers can be disabled by using -Wno-sign-conversion.

For C++, also warn for confusing overload resolution for user-defined conversions; and conversions that never use a type conversion operator: conversions to void, the same type, a base class or a reference to them. Warnings about conversions between signed and unsigned integers are disabled by default in C++ unless -Wsign-conversion is explicitly enabled.

Warnings about conversion from arithmetic on a small type back to that type are only given with -Warith-conversion.
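
Putting a few of those together, an illustrative everyday compile line for development might look like (pick the options that suit you):
g++ -std=c++17 -Wall -Wextra -pedantic -Wshadow -Wconversion -Wfloat-equal -Wuninitialized -g -Og -o code code.cpp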

Thanks @mbozzi and @TheIdeasMan for the detailed hints on compiler usage.
Some of the flags should indeed save me lots of time that would otherwise be spent on debugging.

I will try out some of them on my recent code and see what happens.