C++ Review Questions

std::format was introduced in C++20, Mr. Z, so it's not surprising you'd never seen it before. Most C++ books and websites are pre-C++20.
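For anyone who hasn't tried it yet, here's a minimal std::format sketch (my own illustration; it needs a C++20 compiler whose standard library actually ships <format>):

#include <format>
#include <iostream>

int main()
{
    // std::format returns a std::string; the {} placeholders take
    // Python-style format specifiers such as {:.3f}.
    std::cout << std::format("{} squared is {}\n", 7, 7 * 7);
    std::cout << std::format("pi to 3 decimals: {:.3f}\n", 3.14159);
}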

Even Learn C++ glosses over C++20: there is only a brief page listing what was added, with no tutorials and no complete rundown.

https://www.learncpp.com/cpp-tutorial/introduction-to-c20/

The new three-way comparison operator (<=>), AKA "the spaceship operator," is a radical change in how comparisons are done in C++. The first mention of the operator in "Beginning C++20" is in Chapter 4.
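As a quick taste (a minimal sketch of my own, not an excerpt from the book): defaulting operator<=> in C++20 generates all the comparison operators member-wise.

#include <compare>
#include <iostream>

struct Point
{
    int x{};
    int y{};
    // Defaulting <=> (which also defaults ==) gives us <, <=, >, >=, ==, !=.
    auto operator<=>(const Point&) const = default;
};

int main()
{
    Point a{1, 2};
    Point b{1, 3};
    std::cout << std::boolalpha
              << (a < b) << '\n'    // true  (x equal, 2 < 3)
              << (a == b) << '\n';  // false
}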
Thanks. I have gotten much better at the heap & pointers. The tricky part is when you have an array of pointers, etc.; sometimes it takes me a few tries to get it right without looking, after I haven't seen them for a while.

@George P
Thanks. Actually, the operator overloading chapter was the first thing I went to, because I thought I'd quickly breeze through it as a review, and then I encountered the spaceship operator... and thought to myself, WT heck. Pretty cool & quick, but I'll have to reread it & finish it on a good day.

QUESTION 10)
My MS Windows calculator can add these two numbers in scientific notation and display them at full precision without losing the smaller fractional part. setprecision can display it too. So, a long double has 18-19 digits of precision but cannot display "3650000.000123" because some of the precision seems to be lost to the exponent display & sign.

One trick I can think of is to make a function/class with two double variables, one that stores the part before the decimal point & the other the part after it; basically, split the longer number into two parts. We work out the mathematics on each part, concatenate the output as strings for display, and keep all of that transparent to the user so they don't have to think about the limitations. A rough sketch of that idea is below.
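A toy sketch of that split idea (purely my own illustration of the above, assuming non-negative values and exactly six fractional digits; the fractional part still goes through a double, so this only postpones the problem rather than solving it):

#include <cmath>
#include <cstdint>
#include <cstdio>

// Hypothetical helper: whole part as an integer, fraction as a double in [0, 1).
struct SplitNumber
{
    std::int64_t whole{};
    double       frac{};
};

SplitNumber add(SplitNumber a, SplitNumber b)
{
    SplitNumber r{ a.whole + b.whole, a.frac + b.frac };
    if (r.frac >= 1.0) { ++r.whole; r.frac -= 1.0; }   // carry into the whole part
    return r;
}

int main()
{
    SplitNumber f1{ 3'650'000, 0.0 };
    SplitNumber f2{ 0, 0.000123 };

    SplitNumber f3 = add(f1, f2);

    // Prints 3650000.000123 (six fractional digits, zero-padded)
    std::printf("%lld.%06lld\n",
                static_cast<long long>(f3.whole),
                std::llround(f3.frac * 1e6));
}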

So what do banks use for these calculations? Is there an int32_t/int64_t-style family of wider floating-point types? Is there 3rd-party software for it? 32/64-bit computing is a limitation of the physical hardware & the operating system, so just how exactly are these precision boundaries pushed in the real world?

I just want this line, or "f3 = f1 + f2;", not to lose precision. Is there any way to increase the # of bytes allocated, or some alternative? :
cout << f1 + f2 << endl; //3.65e+06



#include <iostream>
#include <iomanip>
using namespace std;

int main()
{

	long double f1 = 3.65E6;
	//long double f1 = 3'650'000;
	
	long double f2 = 1.23E-4;
	//long double f2 = .000123;
	
	cout << f1 << endl;			//3.65e+06
	cout << f2 << endl;			//0.000123
	cout << f1 + f2 << endl;	//3.65e+06
	cout << setprecision(13) << f1 + f2 << endl;	//3650000.000123
	
	cout << sizeof(long double);		//16 bytes on 64bit compilation
	
	return 0;

}
> I just want this line or this "f3 = f1 + f2;" not to lose precision

Use a library; for example the boost multiprecision library.
https://www.boost.org/doc/libs/1_79_0/libs/multiprecision/doc/html/boost_multiprecision/intro.html

#include <iostream>
#include <iomanip>
#include <boost/multiprecision/cpp_bin_float.hpp>

int main()
{
    using float_type = boost::multiprecision::cpp_bin_float_100 ; // precision: 100 decimal digits
    float_type f1 = 3.65E6;
    float_type f2 = 1.23E-4;

    std::cout << std::fixed ;

    for( int i = 0 ; i < 10 ; ++i )
    {
        std::cout << std::setprecision(6+i*2) << f1 << " + " << f2 << " == " << f1+f2 << '\n' ;
        f1 *= 100.0 ;
        f2 /= 100.0 ;
    }
}


3650000.000000 + 0.000123 == 3650000.000123
365000000.00000000 + 0.00000123 == 365000000.00000123
36500000000.0000000000 + 0.0000000123 == 36500000000.0000000123
3650000000000.000000000000 + 0.000000000123 == 3650000000000.000000000123
365000000000000.00000000000000 + 0.00000000000123 == 365000000000000.00000000000123
36500000000000000.0000000000000000 + 0.0000000000000123 == 36500000000000000.0000000000000123
3650000000000000000.000000000000000000 + 0.000000000000000123 == 3650000000000000000.000000000000000123
365000000000000000000.00000000000000000000 + 0.00000000000000000123 == 365000000000000000000.00000000000000000123
36500000000000000000000.0000000000000000000000 + 0.0000000000000000000123 == 36500000000000000000000.0000000000000000000123
3650000000000000000000000.000000000000000000000000 + 0.000000000000000000000123 == 3650000000000000000000000.000000000000000000000123


http://coliru.stacked-crooked.com/a/8255b578e4f3d573
Thanks. Why wouldn't this have been built into C++ in this day & age?
A lot of what became part of C++ was first part of a 3rd-party library. Boost is probably the main source of material for future C++ releases.

Why aren't ultra-super-duper, waste-a-lot-of-bytes multi-precision numbers part of C++? Not every app needs a billion digits of precision.

It's a trade-off of speed and applicability.
There are lots of things someone may think are essential that aren't built into C++. Most of these are available through 3rd-party libraries, either Boost or others. Thread-safe circular buffer? 3rd party. Soundex and similar functions? 3rd party. JSON parser? 3rd party. A proper CSV parser? 3rd party. Etc., etc.
Most of these are available through 3rd-party libraries, either Boost or others.


Yes, but why do so many of them still get into the C++ standard? Do we need regex, chrono, ranges... if there are good libraries there?
Because someone has bothered to produce a proposal which, after possibly several iterations, has been voted on by the C++ committee and accepted as part of the C++ standard... Not all proposals are accepted. And for some that are, it can take a few years.

For a change to be made to C++ - either to the core language or to the libraries - there first needs to be a proposal. This is then presented to the C++ committee, which examines it, requests changes, etc., and votes on it. If you look at some of the proposals that now form part of C++ (see https://en.cppreference.com/w/cpp/compiler_support and follow the links under Paper(s)), you'll see the history. Take just one: the z literal suffix. It was first proposed back in Nov 2014, was approved in Nov 2019, and is now coming in C++23. It went through 8 revisions before being accepted... 5 years for approval, and then it missed the C++20 cut.

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p0330r8.html
> why do so many of them still get into the C++ standard?
> Do we need regex, chrono, ranges... if there are good libraries there?

There are some drawbacks to relying on third-party libraries to provide facilities that most programmers would find useful:

It would make C++ programs less portable across implementations; for instance, for the same functionality, different libraries may be used on Windows and Unix. The interfaces of these libraries would not be specified by a common international standard.

Using the standard library is seamless. Trying to install and then use a third party library is harder; for beginners, it often turns out to be a painful experience.

I agree that most of the standard library is not essential; for instance the standard library in a freestanding implementation would be a very small subset of that in a hosted implementation. But I do like the idea that the standard library should try to have things that could make the majority of programmers more productive.
yes. My gripe is that there's not enough 'stuff' in the standard library. As the standard library is broken down into many different header files, then you only include what your program needs.

Trying to find a 3rd-party lib, assessing its suitability, installing it, testing it, etc. is not always as easy/painless as it could be - as well as being time consuming. Not to mention it probably needs to be 'signed off' for use. Would you find a 3rd-party lib on, say, GitHub and just install and use it??? Some libs only compile easily with a POSIX compiler (not VS). Some are for VS. Some come with pre-compiled .dlls, some are header-only, some require implementation files to be part of the compilation, etc.

There are also some organizations that won't allow anything other than 'standard' C++ to be used (other than code developed in-house). This can exclude Boost, and almost always excludes free 3rd-party code from, say, GitHub.
The discussion about what belongs in the standard library has been going on for some time.

The advantages have already been mentioned. A disadvantage is that it puts more burden on the standard committee and the standard library implementers. It also makes it very difficult to update things, because they don't want to break existing code. A third-party library might be more willing to make breaking changes (for better or worse).

Bryce Adelstein Lelbach has an interesting talk on the subject:
https://youtu.be/DhOI3eBMWyo?t=331
> Do we need regex, chrono, ranges... if there are good libraries there?

I probably should have been a bit clearer. I meant the Boost libraries. They are cross-platform and all modern compilers support them (AFAIK). They are well-tested and as bug-free as possible.
Leaving all these things out of the C++ standard would give the compiler folks more time to implement the language features. Even now, in 2022, not all compilers fully support C++20. Maybe even in 4-5 years we won't be able to use C++23.

Would you find a 3rd party lib on say github and just install and use it??? Some libs only compile easily with a posix compiler (not VS).

No; actually, I have too often failed to compile projects from GitHub with VS.
I can only speak for myself, but vcpkg makes integrating a library into VS, as if it were a native C++ library, very easy.*

Granted, not every 3rd party library is available via vcpkg, but there are over 1,500 currently available. Including Boost.

https://vcpkg.io/en/

Before I tried vcpkg I had tried and failed to compile a number of 3rd-party libraries. It annoyed me that I could never get a 3rd-party library (other than Boost) to work, despite following the instructions.

vcpkg is usable not just for Windows and Visual Studio. Git and g++ is all that is needed for Linux and MacOS.

vcpkg is usable with CMake for the 3 supported platforms as well, no IDE is required.

*One install via vcpkg and both VS 2019 and VS 2022 have access to the library. For example, I installed the {fmt} library, and now when adding a header the VS IDE can automatically find (and auto-complete) headers in the library, as if they were native C++ library headers like <iostream>. {fmt} is chock-full of sub-headers, so having the IDE show what headers are available is helpful. <fmt/core.h>, for example.
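As a quick illustration of that header (a minimal sketch of my own, assuming {fmt} has been installed, e.g. via vcpkg):

#include <fmt/core.h>   // the core formatting API of the {fmt} library

int main()
{
    fmt::print("{} + {} = {}\n", 2, 3, 2 + 3);
    fmt::print("pi to 3 decimals: {:.3f}\n", 3.14159);
}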

Even though Boost is available via vcpkg I still do a "manual install" to integrate the library into VS. I have a check-list to do it manually:
Open the appropriate VS Developer command prompt for your VS version - this is very crucial!

navigate to the boost root:
cd D:\Programming\boost_1_78_0 (or latest)

+------------------------------------------------------------------------------------+

For Visual Studio 2019:
bootstrap vc142
b2 --stagedir=vc142 -a

The following directory should be added to compiler include paths:
D:\Programming\boost_1_78_0 (or latest)

The following directory should be added to linker library paths:
D:\Programming\boost_1_78_0\vc142\lib (or latest)

+------------------------------------------------------------------------------------+

For Visual Studio 2022:
bootstrap vc143
b2 --stagedir=vc143 -a

The following directory should be added to compiler include paths:
D:\Programming\boost_1_78_0 (or latest)

The following directory should be added to linker library paths:
D:\Programming\boost_1_78_0\vc143\lib (or latest)

+------------------------------------------------------------------------------------+

To add Boost to the default header search:

http://www.cplusplus.com/forum/lounge/271176/#msg1169093

(I really, really, REALLY recommend doing this)

The paths statements are detailed at the end of the build process, but I included them in my notes anyway.

I was doing this Boost build procedure before I had even heard of vcpkg.

The current hosted vcpkg version of Boost isn't the latest, so I continue doing my manual method.

Supposedly it is possible to update to the latest version via vcpkg, issuing an update command.

vcpkg allows for piece-meal installation of selected Boost libraries, no need to install the entire collection.

M'ok, shoot me with a tranquilizer gun, if'n you can't tell I am super-enthusiastic about vcpkg. I like being able to USE a 3rd party library easily without all the build issues. :)
> I probably should have been a bit clearer. I meant the Boost libraries. They are cross-platform
> and all modern compiler support it (AFAIK). They are well-tested and as bug-free as possible.

Yes. But the people involved with the Boost libraries see merit in LWG's philosophy:
The standard library should provide things that are hard for non-compiler writers to implement well on their own
(eg. std::initializer_list), or things that most C++ programmers would need (eg. std::unordered_map).

Proposals for standardisation of libraries based on Boost have been pushed by the library authors; Boost itself has a stated goal of being reference implementations for later standardisation:
We aim to establish "existing practice" and provide reference implementations so that Boost libraries are suitable for eventual standardization.
In this day & age I just don't buy that we should even be questioning the need and desire for larger floating-point precision. It doesn't necessarily have to be treated as a primitive type; implement it like Boost does, or borrow it outright. I am sure there are many scientific C++ programs in chemistry/physics/astronomy that need this. As it stands, if the input comes from an outside source, the floats can be just about useless.

I bet Java & other languages have it built in. Does anyone know how Boost is doing this? Is it just allocating more memory & managing the precision over multiple independent memory locations, or over one contiguous memory location that follows the order of the values & precision?

Looks like the C++ standard does not guarantee even the HIGHER precision that is given to you...from my Beginning C++20 book:

The precision and range of values aren’t prescribed by the C++ standard, so what you get with each type depends on your compiler. And this, in turn, will depend on what kind of processor is used by your computer and the floating-point representation it uses. The standard does guarantee that type long double will provide a precision that’s no less than that of type double, and type double will provide a precision that is no less than that of type float.


Also, if a long double is said to have 18-19 digits of precision & I cannot even get 13 in this mantissa (3650000.000123), then how many digits of precision am I really getting... less than the 15-16 of a double? So, where is the cutoff for you guys? Anything over 10 digits of precision and floating point is just not reliable, so use Boost? What is the cutoff?

It seems like the code to do it is already in setprecision(), since it displays the number nicely; we just need to store it!
> how boost is doing this? Just allocating more memory & managing precisions over multiple independent memory
> locations or one contiguous memory location that follows the order of the values & precision?

It stores the digits as a large integer, the exponent as an integer (typically int) and a bool sign indicator.

using rep_type = cpp_int_backend<...>;
https://github.com/boostorg/multiprecision/blob/develop/include/boost/multiprecision/cpp_bin_float.hpp#L104

 private:
   rep_type      m_data;
   exponent_type m_exponent;
   bool          m_sign;

https://github.com/boostorg/multiprecision/blob/develop/include/boost/multiprecision/cpp_bin_float.hpp#L127
I've written a lot about floating point numbers on this forum and this seems to be a common misconception.

When we say that "double can hold 15 to 16 (decimal) digits", what we're alluding to is that there are roughly 16*lg(10) ~ 53 bits in the mantissa. That means that the exact value of a double is (typically) a sum of 53 consecutive powers of two.

The decimal value
1.23E-4
cannot be represented as such a sum. Instead, it's approximated by the nearest representable floating-point value
0x1.01f31f46ed246p-13 = 0.0001230000000000000081983031474663903281907550990581512451171875
This is quantization error.

setprecision just changes how the textual representation of floating-point numbers is displayed. It doesn't (and can't) do anything to make the stored values more precise.
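A small sketch of my own to show what is (and isn't) there: std::numeric_limits reports the mantissa width, and asking setprecision for more digits than that just exposes the quantization error of the stored value.

#include <iostream>
#include <iomanip>
#include <limits>

int main()
{
    // Typically 53 mantissa bits, which is about 15-16 reliable decimal digits.
    std::cout << "mantissa bits:  " << std::numeric_limits<double>::digits   << '\n';
    std::cout << "decimal digits: " << std::numeric_limits<double>::digits10 << '\n';

    double d = 1.23E-4;
    // 60 digits doesn't add precision; it reveals the nearest representable
    // value that was actually stored.
    std::cout << std::setprecision(60) << d << '\n';
}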

See this thread, page 2 in particular - where the OP had a similar issue:
https://cplusplus.com/forum/beginner/250976/
Looks like the C++ standard does not guarantee even the HIGHER precision that is given to you

It's to allow the compiler to use whatever the computer hardware can support effectively. If all a computer can natively support is 64-bit floating-point numbers, then it might use that for both float and double.

In practice you can essentially count on having the precision you're seeing today for float and double (not long double), unless you use some very special/old hardware, because there is this other standard, IEEE 754, that "everyone" uses. The C++ standard doesn't say float has to be implemented as IEEE "single precision" (binary32) and double as "double precision" (binary64), but that is the expectation, so a compiler would have to have a very good reason not to follow this convention (compiler makers don't intentionally try to piss people off).

https://en.wikipedia.org/wiki/IEEE_754
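If you want to check what your own implementation does, a minimal sketch (my own) via std::numeric_limits:

#include <iostream>
#include <limits>

int main()
{
    // is_iec559 is true when the type follows IEC 60559 (IEEE 754).
    std::cout << std::boolalpha
              << "float  is IEEE 754: " << std::numeric_limits<float>::is_iec559  << '\n'
              << "double is IEEE 754: " << std::numeric_limits<double>::is_iec559 << '\n'
              << "float  mantissa bits: " << std::numeric_limits<float>::digits   << '\n'
              << "double mantissa bits: " << std::numeric_limits<double>::digits  << '\n';
}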


In this day & age I just don't buy that we should even be questioning the need and desire for larger floating-point precision.

But no matter how large a floating-point type you have, you will still run into problems with rounding errors and not being able to store all values exactly. For most applications the precision of double is good enough. It is also fast because it has hardware support.

In my experience the limitations of precision are most likely to become noticeable when you try to print floating-point numbers with too many digits, or when mixing floating-point numbers of widely different magnitudes.


If you need things to be exactly precise, or at least predictable, then you probably don't want to use floating-point numbers at all.

An alternative is to use integers and perhaps use a smaller unit (e.g. instead of storing lengths in metres as floating-point numbers you could store the lengths in millimetres as integers).
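For instance, a toy sketch of that approach (my own illustration, storing money as whole cents in a 64-bit integer):

#include <cstdint>
#include <iostream>

int main()
{
    // Whole cents, so addition is exact - no binary rounding involved.
    std::int64_t price_cents   = 365'000'000'012;  // $3,650,000,000.12
    std::int64_t deposit_cents = 3;                // $0.03

    std::int64_t total_cents = price_cents + deposit_cents;

    std::cout << '$' << total_cents / 100 << '.'
              << (total_cents % 100 < 10 ? "0" : "") << total_cents % 100 << '\n';
}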

Or you could use a fixed-point type, which would be implemented using integers internally. The standard doesn't have a fixed-point type yet but it has been proposed ( see https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p0037r6.html ).

Or maybe you could use some other multiprecision/bignum library that might be implemented in some other way.


My guess is that we will get at least some of these things in a future version of the standard.

But there are a lot of different trade-offs involved. Do you want to prioritize performance, safety, magnitude, and/or precision? Do you want to be able to control the number of bits in the mantissa and the exponent? Maybe you even want to control the base? If you start mixing these it becomes quite complicated.

For most problems I think this is a "red herring". I have the impression that beginners often think they need something like this when really they don't. Instead they often just need to learn how to use floating-point numbers and/or integers effectively.
Mr Z wrote:
long double f2 = 1.23E-4;

Note that floating-point literals are of type double by default. To make it a long double you need to append an L to the end, otherwise you might lose precision if the value cannot be represented exactly.

#include <iostream>
#include <iomanip>

int main()
{
	long double f2  = 1.23E-4; // <-- implicit conversion from double to long double 
	long double f2L = 1.23E-4L;
	
	std::cout << std::setprecision(100);
	std::cout << f2  << '\n';
	std::cout << f2L << '\n';
}

Output (on my computer):
0.0001230000000000000081983031474663903281907550990581512451171875
0.0001230000000000000000059063607412042362643234127972391434013843536376953125

As you can see, the second number is closer to the intended value because we used a literal of type long double.
@Peter87, your code in Visual Studio 2022 produces a different output:
0.0001230000000000000081983031474663903281907550990581512451171875
0.0001230000000000000081983031474663903281907550990581512451171875

Maybe because the size of a long double with VS is 8 bytes, the same as a double?

https://docs.microsoft.com/en-us/cpp/c-language/type-long-double?view=msvc-170