/tech/ - Technology and Computing

Technology, computing, and related topics (like anime)



JulayWorld fallback document - SAVE LOCALLY

JulayWorld onion service: bhlnasxdkbaoxf4gtpbhavref7l2j3bwooes77hqcacxztkindztzrad.onion



Daily(or whatever) Programming Thread Anonymous 08/21/2019 (Wed) 14:47:20 No.8
What are you working on? Whether you just want to show off your shiny new program, or you need help with a programming issue, this is the thread for you!
sepples is such a beautiful language
I decided that defining a Sphere type and using it clarifies the intent further:
>struct Sphere { Vec3 center; double radius; };
>usage in ray_color()
Color ray_color(const Ray& r) {
    Sphere s{Vec3{0, 0, -1}, 0.5};
    auto t{hit_sphere(r, s)}; // The ray parameter t
>usage in hit_sphere()
double hit_sphere(const Ray& r, const Sphere& s) {
    Vec3 oc{r.origin() - s.center};
    auto a{dot(r.dir(), r.dir())};
    auto b{2.0 * dot(oc, r.dir())};
    auto c{dot(oc, oc) - (s.radius * s.radius)};
>>2310 I agree. It can take some getting used to if you come from C as I did, and it's just as prone to the same type of abuse. But it can be used correctly, and it's a pleasure to work with when it is.
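[For anyone following along: the quoted hit_sphere() is cut off above. Here's a self-contained sketch of how it plausibly continues; the Vec3/Ray stand-ins and the -1.0 miss sentinel are guesses for illustration, not the poster's actual headers.]

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins, just enough to make the quoted hit_sphere() compile.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
inline double dot(const Vec3& a, const Vec3& b) {
    return (a.x * b.x) + (a.y * b.y) + (a.z * b.z);
}

struct Ray {
    Vec3 orig, direction;
    const Vec3& origin() const { return orig; }
    const Vec3& dir() const { return direction; }
};
struct Sphere { Vec3 center; double radius; };

// Solve |oc + t*dir|^2 = r^2 for t; return the nearest root,
// or -1.0 (assumed sentinel) when the ray misses the sphere.
double hit_sphere(const Ray& r, const Sphere& s) {
    Vec3 oc{r.origin() - s.center};
    auto a = dot(r.dir(), r.dir());
    auto b = 2.0 * dot(oc, r.dir());
    auto c = dot(oc, oc) - (s.radius * s.radius);
    auto discriminant = (b * b) - (4.0 * a * c);
    if (discriminant < 0.0)
        return -1.0;                                    // ray misses
    return (-b - std::sqrt(discriminant)) / (2.0 * a);  // nearest hit
}
```

A ray fired from the origin straight at a sphere of radius 0.5 centered at (0, 0, -1) should hit at t = 0.5.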
Open file (5.22 KB 200x100 image.png)
Welp, got more work done today. During the next part of the code design, he abstracts the notion of 'Hittable' so that the raycast hit function can apply to arbitrary things and not just be hard-coded into the function. He creates a 'Hittable' abstract class, then a 'Hittable_list' that encapsulates a std::vector of Hittables. He apparently agreed with me that abstracting the idea of a Sphere itself was a good idea (also derived from Hittable, ofc), and created a couple of constants and functions in a Utility file. Finally he ties everything back together in the original main file, adding a couple of spheres into the world; it both simplifies the code overall and, to my way of thinking, pretty much makes the entire thing easier to reason about. Here's the fruits of all my toil today haha. :^) > I'll add the code in the next post. Have a good one /tech/.
>Utility.hpp
#pragma once

// headers
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <limits>
#include <memory>
#include <vector>

#include "Ray.hpp"
#include "Vec3.hpp"

// usings
using std::cerr;
using std::cout;
using std::make_shared;
using std::shared_ptr;
using std::vector;

// constants
const double inf = std::numeric_limits<double>::infinity();
const double pi = 3.1415926535897932385;

// functions
inline double deg_to_rad(const double deg) { return (deg * pi) / 180.0; }
inline double ffmin(const double a, const double b) { return a <= b ? a : b; }
inline double ffmax(const double a, const double b) { return a >= b ? a : b; }

>Hittable.hpp
#pragma once

#include "Utility.hpp"

struct Hit_rec {
    double t;
    Vec3 p;
    Vec3 normal;
    bool front_face;

    // we're using the geo approach and just having normals always face outwards
    inline void set_normal(const Ray& r, const Vec3& out_normal) {
        front_face = dot(r.dir(), out_normal) < 0; // is normal in same dir as ray?
        normal = front_face ? out_normal : -out_normal;
    }
};

/**-----------------------------------------------------------------------------
"...a very clean solution is to make an “abstract class” for anything a ray
might hit and make both a sphere and a list of spheres just something you can
hit. What that class should be called is something of a quandary — calling it
an “object” would be good if not for “object oriented” programming. “Surface”
is often used, with the weakness being maybe we will want volumes. “hittable”
emphasizes the member function that unites them. I don’t love any of these but
I will go with “hittable”.
*/
class Hittable {
public:
    virtual bool hit(const Ray& r, const double t_min, const double t_max,
                     Hit_rec& rec) const = 0;
};

>Hittable_list.hpp
#pragma once

#include "Hittable.hpp"
#include "Utility.hpp"

/**----------------------------------------------------------------------------- */
class Hittable_list : public Hittable {
public:
    Hittable_list() {}
    Hittable_list(shared_ptr<Hittable> obj) { add(obj); }

    // funcs
    void clear() { objs.clear(); }
    void add(shared_ptr<Hittable> obj) { objs.push_back(obj); }
    virtual bool hit(const Ray& r, const double t_min, const double t_max,
                     Hit_rec& rec) const override;

    // fields
    vector<shared_ptr<Hittable>> objs;
};

/**----------------------------------------------------------------------------- */
bool Hittable_list::hit(const Ray& r, const double t_min, const double t_max,
                        Hit_rec& rec) const {
    Hit_rec tmp_rec{};
    bool have_hit{false};
    auto closest{t_max}; // we only need to be concerned w/ closest normal

    for (const auto& obj : objs) {
        if (obj->hit(r, t_min, closest, tmp_rec)) {
            have_hit = true;
            closest = tmp_rec.t; // set during obj.hit()
            rec = tmp_rec;
        }
    }
    return have_hit;
}

>Sphere.hpp
#pragma once

#include "Hittable.hpp"

/**----------------------------------------------------------------------------- */
class Sphere : public Hittable {
public:
    Sphere() {}
    Sphere(const Vec3& center, const double radius)
        : center_{center}
        , radius_{radius} {}

    virtual bool hit(const Ray& r, const double t_min, const double t_max,
                     Hit_rec& rec) const override;
    bool set_normal(const Ray& r, Hit_rec& rec, const double tmp) const;

    // fields
    Vec3 center_;
    double radius_;
};

/**----------------------------------------------------------------------------- */
bool Sphere::hit(const Ray& r, const double t_min, const double t_max,
                 Hit_rec& rec) const {
    Vec3 oc{r.origin() - center_};
    auto a{r.dir().len_squared()};
    auto half_b{dot(oc, r.dir())};
    auto c{oc.len_squared() - (radius_ * radius_)};
    auto dscrmnt{(half_b * half_b) - (a * c)};

    if (dscrmnt > 0.0) {
        auto root{sqrt(dscrmnt)};
        auto tmp{(-half_b - root) / a};
        if (tmp > t_min && tmp < t_max)
            return set_normal(r, rec, tmp);
        // let's try it from the other side then
        tmp = (-half_b + root) / a;
        if (tmp > t_min && tmp < t_max)
            return set_normal(r, rec, tmp);
    }
    return false;
}

/**----------------------------------------------------------------------------- */
bool Sphere::set_normal(const Ray& r, Hit_rec& rec, const double tmp) const {
    rec.t = tmp;
    rec.p = r.at(rec.t);
    auto out_normal{(rec.p - center_) / radius_};
    rec.set_normal(r, out_normal);
    return true;
}
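[Side note on the half_b trick in Sphere::hit: since b = 2h, the quadratic roots (-b ± √(b² - 4ac)) / 2a reduce to (-h ± √(h² - ac)) / a, saving a few multiplies. A throwaway check of that algebra, with arbitrary values of my choosing:]

```cpp
#include <cassert>
#include <cmath>

// Verify that the half-b quadratic form gives the same nearest root as the
// full form, for given a, h (= b/2), c with a real discriminant.
bool half_b_matches(double a, double h, double c) {
    double b = 2.0 * h;
    double full = (-b - std::sqrt((b * b) - (4.0 * a * c))) / (2.0 * a);
    double half = (-h - std::sqrt((h * h) - (a * c))) / a;
    return std::fabs(full - half) < 1e-9;
}
```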
I'll go ahead and repost ray1.cpp as well, since it's changed a fair bit now (for the better imo).
>
// filename: ray1.cpp
// -this file is babby's first raycaster
// usage: ./build/ray1 > image.ppm
//
// sauce:
// https://raytracing.github.io/books/RayTracingInOneWeekend.html
// #rays,asimplecamera,andbackground

#include "Hittable_list.hpp"
#include "Sphere.hpp"
#include "Utility.hpp"

/**----------------------------------------------------------------------------- */
Color ray_color(const Ray& r, const Hittable& world) {
    // calc any raycast hits in the world
    Hit_rec rec{};
    if (world.hit(r, 0.0, inf, rec))
        return 0.5 * (rec.normal + Color{1.0, 1.0, 1.0});

    // otherwise, do background gradient...
    /* The ray_color(ray) function linearly blends white and blue depending on
       the height of the y coordinate after scaling the ray direction to unit
       length (so −1.0 < y < 1.0). Because we're looking at the y height after
       normalizing the vector, you'll notice a horizontal gradient to the color
       in addition to the vertical gradient. */
    auto unit_dir{unit_vec(r.dir())};
    auto t{0.5 * (unit_dir.y() + 1.0)};
    return ((1.0 - t) * Color{1.0, 1.0, 1.0}) + (t * Color{0.5, 0.7, 1.0});
}

/**----------------------------------------------------------------------------- */
int main() {
    const int w{200}, h{100};
    cout << "P3\n" << w << " " << h << "\n255\n";

    Vec3 lwr_lt{-2.0, -1.0, -1.0};
    Vec3 horz{4.0, 0.0, 0.0};
    Vec3 vtcl{0.0, 2.0, 0.0};
    Vec3 origin{0.0, 0.0, 0.0}; // cam origin

    // add a couple of spheres to the world
    Hittable_list world;
    world.add(make_shared<Sphere>(Vec3{0.0, 0.0, -1.0}, 0.5));
    world.add(make_shared<Sphere>(Vec3{0.0, -100.5, -1.0}, 100.0)); // big one!

    for (int i{h - 1}; i >= 0; --i) {
        cerr << "\rScanlines remaining: " << i << ' ' << std::flush;
        for (int j{0}; j < w; ++j) {
            auto u{double(j) / w}; // [0 - 1)
            auto v{double(i) / h}; // "
            // 4 Vec3's are used here to construct the ray:
            // 1 standalone, then 3 added together after two multiplied by uv doubles
            // r = (Vec3, (Vec3 + Vec3 + Vec3))
            Ray r{origin, lwr_lt + (horz * u) + (vtcl * v)};
            Color pix{ray_color(r, world)};
            pix.wrt_colr(cout);
        }
    }
    cerr << "\nDone.\n";
}
Added a Camera class and antialiasing today.
> This significantly increased the run time, since every pixel now has 100 random rays cast into it (or is it 10'000?). I imagine there's a way to only perform the antialiasing along the edges of an object, but I'm not sure how just yet. Anyway, he added a Camera class and moved the frame's settings into its default ctor. He added a random number generator, which I tweaked to give a more random distribution in [0.0, 1.0). The main() function now has a new Camera instance and an interior loop that creates and sums the random rays for each pixel. ray_color() is unchanged.
>Camera.hpp
#pragma once

#include "Utility.hpp"

class Camera {
public:
    Camera()
        : origin{0.0, 0.0, 0.0}
        , lwr_lt{-2.0, -1.0, -1.0}
        , horz{4.0, 0.0, 0.0}
        , vtcl{0.0, 2.0, 0.0} {}

    // funcs
    inline Ray get_ray(const double u, const double v) const {
        return Ray{origin, lwr_lt + (u * horz) + (v * vtcl) - origin};
    }

    // fields
    Vec3 origin;
    Vec3 lwr_lt;
    Vec3 horz;
    Vec3 vtcl;
};

>Vec3.hpp snippet
void wrt_colr(std::ostream& out, const int samps_per_pix) {
    auto scale{1.0 / samps_per_pix};
    auto r{scale * e[0]};
    auto g{scale * e[1]};
    auto b{scale * e[2]};

    // write out the normalized [0,255] value of each color component
    out << static_cast<int>(256 * std::clamp(r, 0.0, 0.999)) << ' '
        << static_cast<int>(256 * std::clamp(g, 0.0, 0.999)) << ' '
        << static_cast<int>(256 * std::clamp(b, 0.0, 0.999)) << '\n';
}

>Utility.hpp snippets
#include <functional>
#include <random>

using std::random_device;
using unfrm_real_dstrb = std::uniform_real_distribution<double>;
using dbl_func = std::function<double()>;

// 0 ≤ r < 1
inline double rnd_double() {
    static unfrm_real_dstrb dstrb_0_1{0.0, 1.0};
    static std::mt19937 rd_gen{random_device{}()};
    static dbl_func rand_generator{bind(dstrb_0_1, rd_gen)};
    return rand_generator();
}
>ray1.cpp snippet
int main() {
    const int w{200}, h{100};
    const int samps_per_pix{100};
    cout << "P3\n" << w << " " << h << "\n255\n";

    // add a couple of spheres into the world
    Hittable_list world{};
    world.add(make_shared<Sphere>(Vec3{0.0, 0.0, -1.0}, 0.5));
    world.add(make_shared<Sphere>(Vec3{0.0, -100.5, -1.0}, 100.0)); // big one!

    Camera cam{};
    for (int i{h - 1}; i >= 0; --i) {
        cerr << "\rScanlines remaining: " << i << ' ' << std::flush;
        for (int j{0}; j < w; ++j) {
            Color pix{};
            for (int s{0}; s < samps_per_pix; ++s) {
                auto u{(j + rnd_double()) / w};
                auto v{(i + rnd_double()) / h};
                Ray r{cam.get_ray(u, v)};
                pix += ray_color(r, world);
            }
            pix.wrt_colr(cout, samps_per_pix);
        }
    }
    cerr << "\nDone.\n";
}
>>2354 forgot to add the std::bind using-declaration to the 1st Utility.hpp snippet.
> using std::bind;
>>2357 Actually, check that. It's 100, just like the variable name says. There are 200 random values in [0, 1) for each pixel (100 u, 100 v).
>>2358 Whuups! Time to change my diaper!
Open file (11.33 KB 200x100 image.png)
Began adding materials, starting with Lambertian diffuse. > He added three variations of scattering functions for the rays, and made ray_color() recursive, with a bounce-limit parameter. He also added gamma-correction of 2.0 to the wrt_colr() func in Vec3.
>ray1.cpp snippets
Color ray_color(const Ray& r, const Hittable& world, const int depth) {
    // calc any raycast hits in the world
    if (depth <= 0)
        return Vec3{0.0, 0.0, 0.0}; // at bounce limit, gather stops

    // the recursion will stop once we fail to hit anything
    Hit_rec rec{};
    if (world.hit(r, 0.001, inf, rec)) { // epsilon used to rm shadow-acne
        // try these variations
        // Vec3 targ{rec.p + rec.normal + rnd_unit_vec()};
        // Vec3 targ{rec.p + rec.normal + rnd_in_unit_sphere()};
        Vec3 targ{rec.p + rec.normal + rnd_in_hemi(rec.normal)};
        // recursive
        return 0.5 * ray_color(Ray{rec.p, targ - rec.p}, world, depth - 1);
    }

int main() {
    const int w{200}, h{100};
    const int samps_per_pix{100};
    const int max_bounce{50};
    ...
    pix += ray_color(r, world, max_bounce);

>Vec3.hpp snippets
void wrt_colr(std::ostream& out, const int samps_per_pix) {
    auto scale{1.0 / samps_per_pix};
    // add gamma-correction 2.0
    auto r{sqrt(scale * e[0])};
    auto g{sqrt(scale * e[1])};
    auto b{sqrt(scale * e[2])};

// unit length
Vec3 rnd_unit_vec() {
    auto a{rnd_double(0.0, pi)};
    auto z{rnd_double(-1.0, 1.0)};
    auto r{sqrt(1.0 - (z * z))};
    return Vec3{r * cos(a), r * sin(a), z};
}

// random length [0 - 1)
Vec3 rnd_in_unit_sphere() {
    while (true) {
        auto p{Vec3::random(-1.0, 1.0)};
        if (p.len_squared() >= 1.0) // this effectively limits the 2.0 range
            continue;               // [-1 - 1) above down to just unit length
        return p;
    }
}

// half volume
Vec3 rnd_in_hemi(const Vec3& normal) {
    Vec3 in_unit_sphere{rnd_in_unit_sphere()};
    if (dot(in_unit_sphere, normal) > 0.0)
        return in_unit_sphere;
    else
        return -in_unit_sphere;
}

>Utility.hpp snippet
inline double rnd_double(const double min, const double max) {
    static unfrm_real_dstrb dstrb_0_1{min, max};
    static std::mt19937 rd_gen{random_device{}()};
    static dbl_func rand_generator{bind(dstrb_0_1, rd_gen)};
    return rand_generator();
}

inline double rnd_double() {
    return rnd_double(0.0, 1.0); // 0 ≤ r < 1
}

I think that'll do for today, I'll plan to start chapter 9 tomorrow. Cheers.
>>2361 Just realized my random double function's var name needed adjusting after the params change.

inline double rnd_double(const double min, const double max) {
    static unfrm_real_dstrb dstrb_min_max{min, max};
    static std::mt19937 rd_gen{random_device{}()};
    static dbl_func rand_generator{bind(dstrb_min_max, rd_gen)};
    return rand_generator();
}
>>2362 >static unfrm_real_dstrb dstrb_min_max{min, max}; Won't this cause only the first function call to set min/max? Wouldn't the function use the values from the first function call afterwards due to "static"? C++ doesn't have parameter-based memoization AFAIK.
>>2363 easy enough to test anon, judge for yourself.

#include <functional>
#include <iostream>
#include <random>

using std::random_device;
using unfrm_real_dstrb = std::uniform_real_distribution<double>;
using dbl_func = std::function<double()>;

inline double rnd_double(const double min, const double max) {
    static unfrm_real_dstrb dstrb_min_max{min, max};
    static std::mt19937 rd_gen{random_device{}()};
    static dbl_func rand_generator{bind(dstrb_min_max, rd_gen)};
    return rand_generator();
}

int main() {
    for (unsigned i{0}; i < 10; ++i) {
        std::cout << rnd_double(0.0, 1.0) << ' ';
    }
    std::cout << "\n\n";
}

output example:
0.165246 0.4991 0.579882 0.213314 0.603315 0.565854 0.268919 0.708031 0.586541 0.368085
>>2363 my first experiments seem to prove your point given my simple test harness. near as i can tell however, this code worked correctly (at least at first glance)

// unit length
Vec3 rnd_unit_vec() {
    auto a{rnd_double(0.0, pi)};
    auto z{rnd_double(-1.0, 1.0)};
    auto r{sqrt(1.0 - (z * z))};
    return Vec3{r * cos(a), r * sin(a), z};
}

thanks for the query anon, i'll look into it further tomorrow.
>>2364 this is the loop i should have written here:

for (unsigned i{0}; i < 10; ++i) {
    std::cout << rnd_double(0.0, double(i)) << ' ';
}
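[To see >>2363's point in isolation: a static local is initialized exactly once, on the first call, and silently keeps that value on every later call regardless of the arguments. A minimal demonstration with a hypothetical function, using plain ints so the effect is deterministic:]

```cpp
#include <cassert>

// A static local captures only the FIRST call's argument; later arguments
// are ignored because static initialization happens exactly once.
int first_call_wins(int value) {
    static int captured = value; // runs only on the first call
    return captured;
}
```

This is exactly why the static uniform_real_distribution locked in the first min/max it ever saw.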
Open file (13.88 KB 200x100 cpp_image_mins.png)
Open file (13.84 KB 200x100 c_image_secs.png)
>>2363 OK, I've both confirmed your point, and also discovered why that approach was most likely taken. I thought it odd that the shadows seemed to have an angle. Sure enough, it's because the min/max was locked in on the first use. Removing the statics fixed the image, but dramatically increased the rendering time as well. I tweaked it for slightly better performance, but in the end switched over to a C version.

>Utility.hpp
#pragma once

#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <limits>
#include <memory>
#include <vector>

// usings
using std::cerr;
using std::cout;
using std::make_shared;
using std::shared_ptr;
using std::vector;

// constants
const double inf{std::numeric_limits<double>::infinity()};
const double pi{3.1415926535897932385};

// functions
inline double rnd_double() {
    // Returns a random real in [0,1).
    return rand() / (RAND_MAX + 1.0);
}

inline double rnd_double(double min, double max) {
    // Returns a random real in [min,max).
    return min + (max - min) * rnd_double();
}

inline double deg_to_rad(const double deg) { return (deg * pi) / 180.0; }
inline double ffmin(const double a, const double b) { return a <= b ? a : b; }
inline double ffmax(const double a, const double b) { return a >= b ? a : b; }

// headers (include here at the end)
#include "Ray.hpp"
#include "Vec3.hpp"

This is now back to rendering in just seconds. I think the C++ version is a tiny bit better quality (to my admittedly untrained eye), but the increase in rendering time simply isn't worth it. The C version is plenty good enough.
> He implied as much in the text of the book: the standard C++ library's random number generator wasn't an optimal solution here. There are few areas where C is still notably superior to C++ (I've coded in both professionally for years), but clearly the random number generator system is one area where C++ is still inferior.
>>2371
>but clearly the random number generator system is an area that is still inferior in the C++ system.
I guess that was a bit hasty. In all fairness to the C++ set of random number libraries, they are a) far more flexible & extensive, b) easily tweaked in algorithms by simply swapping in differing engines & distributions, c) preferred in the sciences & maths in general over the C version for just these reasons, and finally far more extensively developed into new alternatives version after version of the standard. The slower runtime is probably a direct artifact of the much more careful attention to accuracy in these engines.

Engines:
 linear_congruential_engine: minstd_rand0 (C++11), minstd_rand (C++11)
 mersenne_twister_engine: mt19937 (C++11), mt19937_64 (C++11)
 subtract_with_carry_engine: ranlux24_base (C++11), ranlux48_base (C++11)
 discard_block_engine: ranlux24 (C++11), ranlux48 (C++11)
 independent_bits_engine
 shuffle_order_engine: knuth_b (C++11)
 random_device: non-deterministic random number generator using a hardware entropy source

Distributions:
 Uniform: uniform_int_distribution, uniform_real_distribution
 Bernoulli: bernoulli_distribution, binomial_distribution, negative_binomial_distribution, geometric_distribution
 Poisson: poisson_distribution, exponential_distribution, gamma_distribution, weibull_distribution, extreme_value_distribution
 Normal: normal_distribution, lognormal_distribution, chi_squared_distribution, cauchy_distribution, fisher_f_distribution, student_t_distribution
 Sampling: discrete_distribution, piecewise_constant_distribution, piecewise_linear_distribution
 Utilities: generate_canonical, seed_seq

> https://en.cppreference.com/w/cpp/header/random
With C, you get rand(), love it or leave it. In much of engineering the C version may suffice, and certainly in the case of a raytracer. But for actual accuracy in the maths & sciences, the C++ version is far superior.
>>2353 BTW, I discovered I inadvertently had smoothing-during-zoom turned on in my image viewer for the earlier cap. Here's an accurate rendition of what the anti-aliasing is doing in the algorithm.
>>2373
>and finally [d)] far more extensively developed into new alternatives version after version of the standard.
I guess I should have clarified that it's the entire sweep of the numerics libraries that is constantly being updated with each iteration of the standard, since that's really what I meant. https://en.cppreference.com/w/cpp/numeric C++ has firmly established itself as an important software framework for scientific investigation over the last decade, and underlies many of the advances in scripting frameworks in languages like Python & R, for example TensorFlow.
Open file (19.74 KB 200x100 image.png)
OK, began working on metals and reflections now. He created a Material abstract class that has a scatter() function, then two classes that inherit from it: a Lambertian class that moves in the previous behavior from the ray_color() function, and a Metal class that has reflectivity. The hit record struct now has a Material shared pointer field, and a couple of new metal spheres were added into the scene. > It's apparent the Camera field settings could use some tweaking, but the system has real metal reflections now so that feels pretty neat tbh.
>Material.hpp
#pragma once

#include "Hittable.hpp"
#include "Utility.hpp"

/**----------------------------------------------------------------------------- */
class Material {
public:
    virtual bool scatter(const Ray& ray_in, const Hit_rec& rec,
                         Vec3& attenuation, Ray& scattered) const = 0;
};

/**----------------------------------------------------------------------------- */
class Lambertian : public Material {
public:
    Lambertian(const Vec3& albedo)
        : albedo_{albedo} {}

    virtual bool scatter(const Ray& /*ray_in*/, const Hit_rec& rec,
                         Vec3& attenuation, Ray& scattered) const override {
        // Vec3 scatter_dir{rec.normal + rnd_unit_vec()};
        // Vec3 scatter_dir{rec.normal + rnd_in_unit_sphere()};
        Vec3 scatter_dir{rec.normal + rnd_in_hemi(rec.normal)};
        scattered = Ray{rec.p, scatter_dir};
        attenuation = albedo_;
        return true;
    }

    // fields
    Vec3 albedo_;
};

/**----------------------------------------------------------------------------- */
class Metal : public Material {
public:
    Metal(const Vec3& albedo)
        : albedo_{albedo} {}

    virtual bool scatter(const Ray& ray_in, const Hit_rec& rec,
                         Vec3& attenuation, Ray& scattered) const override {
        Vec3 reflected{reflect(unit_vec(ray_in.dir()), rec.normal)};
        scattered = Ray{rec.p, reflected};
        attenuation = albedo_;
        return dot(scattered.dir(), rec.normal) > 0.0;
    }

    // fields
    Vec3 albedo_;
};

>Hittable.hpp snippet
class Material;

struct Hit_rec {
    Vec3 p;
    Vec3 normal;
    bool front_face;
    double t;
    shared_ptr<Material> mat_p;

    // we're using the geo approach and just having normals always face outwards
    inline void set_normal(const Ray& r, const Vec3& out_normal) {
        front_face = dot(r.dir(), out_normal) < 0; // is normal in same dir as ray?
        normal = front_face ? out_normal : -out_normal;
    }
};

>Sphere.hpp snippet
/**----------------------------------------------------------------------------- */
class Sphere : public Hittable {
public:
    Sphere() {}
    Sphere(const Vec3& center, const double radius, shared_ptr<Material> mat_p)
        : center_{center}
        , radius_{radius}
        , mat_p_{mat_p} {}

    virtual bool hit(const Ray& r, const double t_min, const double t_max,
                     Hit_rec& rec) const override;
    bool set_normal(const Ray& r, Hit_rec& rec, const double tmp) const;

    // fields
    Vec3 center_;
    double radius_;
    shared_ptr<Material> mat_p_;
};

>Vec3.hpp snippet
inline Vec3 reflect(const Vec3& v, const Vec3& n) {
    return v - ((2 * dot(v, n)) * n);
}
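[Sanity check for the reflect() one-liner above, which is the standard mirror formula r = v - 2(v·n)n: a ray angling down onto a floor with an upward normal should bounce back up at the same angle. The Vec3 here is a throwaway stand-in for illustration, not the thread's actual header.]

```cpp
#include <cassert>

// Tiny stand-in Vec3 with just enough operators to exercise the
// mirror-reflection formula r = v - 2*dot(v, n)*n.
struct Vec3 { double x, y, z; };
inline double dot(const Vec3& a, const Vec3& b) {
    return (a.x * b.x) + (a.y * b.y) + (a.z * b.z);
}
inline Vec3 operator*(double s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
inline Vec3 operator-(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

// Same shape as the thread's reflect(): v minus twice its projection onto n.
inline Vec3 reflect(const Vec3& v, const Vec3& n) {
    return v - ((2 * dot(v, n)) * n);
}
```

With v = (1, -1, 0) and n = (0, 1, 0), the y component flips and x is preserved, giving (1, 1, 0).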
ray_color() is still recursive, but is simpler now. The objects need to be constructed with a material pointer.
>ray1.cpp
// filename: ray1.cpp
// -this file is babby's first raycaster
// usage: ./build/ray1 > image.ppm
//
// sauce:
// https://raytracing.github.io/books/RayTracingInOneWeekend.html

#include "Camera.hpp"
#include "Hittable_list.hpp"
#include "Material.hpp"
#include "Sphere.hpp"

/**----------------------------------------------------------------------------- */
Color ray_color(const Ray& r, const Hittable& world, const int depth) {
    // calc any raycast hits in the world
    if (depth <= 0)
        return Vec3{0.0, 0.0, 0.0}; // at bounce limit, gather stops

    // the recursion will stop once we fail to hit anything
    Hit_rec rec{};
    if (world.hit(r, 0.001, inf, rec)) { // epsilon used to rm shadow-acne
        Ray scattered{};
        Vec3 attenuation{};
        // recursive
        if (rec.mat_p->scatter(r, rec, attenuation, scattered))
            return attenuation * ray_color(scattered, world, depth - 1);
        return Vec3{0.0, 0.0, 0.0};
    }

    // otherwise, do background gradient...
    auto unit_dir{unit_vec(r.dir())};
    auto t{0.5 * (unit_dir.y() + 1.0)};
    return ((1.0 - t) * Color{1.0, 1.0, 1.0}) + (t * Color{0.5, 0.7, 1.0});
}

/**----------------------------------------------------------------------------- */
int main() {
    const int w{200}, h{100};
    const int samps_per_pix{100};
    const int max_bounce{50};
    cout << "P3\n" << w << " " << h << "\n255\n";

    // add some spheres into the world
    Hittable_list world{};
    world.add(make_shared<Sphere>(Vec3{0, 0, -1}, 0.5,
                                  make_shared<Lambertian>(Color{0.5, 0.7, 1.0})));
    world.add(make_shared<Sphere>(Vec3{0, -100.5, -1}, 100,
                                  make_shared<Lambertian>(Color{0.8, 0.8, 0.8})));
    world.add(make_shared<Sphere>(Vec3{1, 0, -1}, 0.5,
                                  make_shared<Metal>(Color{0.8, 0.6, 0.2})));
    world.add(make_shared<Sphere>(Vec3{-1, 0, -1}, 0.5,
                                  make_shared<Metal>(Color{0.7, 0.5, 0.8})));

    //
    Camera cam{};
    for (int i{h - 1}; i >= 0; --i) {
        cerr << "\rScanlines remaining: " << i << ' ' << std::flush;
        for (int j{0}; j < w; ++j) {
            Color pix{};
            for (int s{0}; s < samps_per_pix; ++s) {
                auto u{(j + rnd_double()) / w};
                auto v{(i + rnd_double()) / h};
                Ray r{cam.get_ray(u, v)};
                pix += ray_color(r, world, max_bounce);
            }
            pix.wrt_colr(cout, samps_per_pix);
        }
    }
    cerr << "\nDone.\n";
}
>>2452 He added a 'fuzz' parameter to the Metal ctor that can diffuse the reflectivity, making it appear somewhat like cast or brushed metal. This example uses a ~1/3 setting.
>
>Material.hpp snippet
/**----------------------------------------------------------------------------- */
class Metal : public Material {
public:
    Metal(const Color& albedo, double fuzz = 0.0)
        : albedo_{albedo}
        , fuzz_{fuzz < 1.0 ? fuzz : 1.0} {}

    //
    virtual bool scatter(const Ray& ray_in, const Hit_rec& rec,
                         Color& attenuation, Ray& scattered) const override {
        Vec3 reflected{reflect(unit_vec(ray_in.dir()), rec.normal)};
        scattered = Ray{rec.p, reflected + (fuzz_ * rnd_in_unit_sphere())};
        attenuation = albedo_;
        return dot(scattered.dir(), rec.normal) > 0.0;
    }

    //
    Color albedo_;
    double fuzz_;
};

>ray1.cpp snippet
/**----------------------------------------------------------------------------- */
void add_objs_to(Hittable_list& world) {
    // add some spheres into the world
    // Lamberts
    world.add(make_shared<Sphere>(Vec3{0.0, 0.0, -1.0}, 0.5,
                                  make_shared<Lambertian>(Color{0.5, 0.7, 1.0})));
    world.add(make_shared<Sphere>(Vec3{0.0, -100.5, -1.0}, 100.0,
                                  make_shared<Lambertian>(Color{0.8, 0.8, 0.8})));
    // Metals
    world.add(make_shared<Sphere>(Vec3{1.0, 0.0, -1.0}, 0.5,
                                  make_shared<Metal>(Color{0.8, 0.6, 0.2}, 0.3)));
    world.add(make_shared<Sphere>(Vec3{-1.0, 0.0, -1.0}, 0.5,
                                  make_shared<Metal>(Color{0.7, 0.5, 0.8})));
}

/**----------------------------------------------------------------------------- */
int main() {
    const int w{200}, h{100};
    const int samps_per_pix{100};
    const int max_bounce{50};
    cout << "P3\n" << w << " " << h << "\n255\n";

    Hittable_list world{};
    add_objs_to(world);

    Camera cam{};
    ...
Open file (20.67 KB 200x100 image.png)
>>2456 forgot example pic
Hi gommie! :^)
Open file (50.44 KB 604x551 1427818685578.jpg)
>>2476 Hello Robot.
>finally figured out what pointer arithmetic is useful for
>>2959 Seems to me there are numerous uses for pointer arithmetic (and an even larger set of potential abuses of them). Mind spelling out what you realized Anon?
>>2959 >>2960 >useful for what >\0 What mean?
>>2961 >What mean? > start here for the answers to these and other deep mysteries anon.
Open file (89.68 KB 786x1017 DAY OF THE SEAL.jpg)
>What are you working on? DAY OF THE SEAL
>>2963 Silly Anon, DAY OF THE POLAR BEAR is coming soon tbh.
>>2962 Don't be mean. Every beginner need pointers. >>2961 >>0 Here you go.
>>2968 > Every beginner need pointers. < badumtiss >>0 < le ebin rusemaster
>>2970 In most implementations, NULL == 0
>>2974 yes, but macros are evil tbh. use nullptr if you need a NULL. otherwise just use an int, but definitely don't treat pointers like integers.
>>2978 >don't treat pointers like integers heh, https://godbolt.org/z/LcpGg-
Open file (46.78 KB 610x402 Top.Men.jpeg)
>>2979 >uses compiler explorer < mein dude >top-quality code < pic related I'm conflicted here, Anon.
Open file (346.74 KB 751x1100 1561259958667.png)
>What are you working on?
A program that slightly "simulates" various models of soviet-built reaktor and turbine models. It uses the fugtorio power calculation formula because I'm a fucking math pleb. Here is the code: https://0bin.net/paste/DrsKKzyJH7upJPvK#2pjTt1aQGNyVijAUt6ZS8pRHKBecYNnV31WD2gTxaQ4 When the code is more mature I will upload it on gitgud instead. Currently I'm trying to hammer down the functionalities of the reaktors and turbines, which in my opinion are working a little bit sub-optimally. The main issue is mostly that I can't get it to work where there are diminishing returns when too many turbines are running, for example. Later I will try to write a parser so that the list of available models is saved in a text file; then, when I write the mainloop again, the player can choose which types he wants to use in a structure/block.
>>2982
>The main issue is mostly that I can't get it to work where there are diminishing returns when too many turbines are running.
I don't really into Python, but algorithmically-speaking, could you just sort of kludge a 'dim_returns' inverse power factor that would increase as the count of reactors went up? Regardless, looks interesting Anon.
>>2984
>I don't really into Python, but algorithmically-speaking, could you just sort of kludge a 'dim_returns' inverse power factor that would increase as the count of reactors went up?
I just tinkered around with it now and I think the results should be sufficient for now. The code could be made a bit more compact, but aside from that it seems to be doing what I want. What the code does is increase the power value by its initial value; on each run the divider and turbine count are incremented, and once the turbine count hits a threshold the divider kicks in and the power increase is reduced. I could also add a divider limit so that it doesn't reduce the power too much. Hmm, it does make me a bit curious whether it will behave differently when I bother adding a water/steam flowrate object; I guess I will see later once the reaktor/turbine object is more complete.

"""
Expected pattern:
0 - 10   | + 0
1 - 20   | + 10
2 - 30   | + 10  - Start of diminishing return.
3 - 35   | + 5
4 - 38   | + 3
5 - 40.5 | + 2.5
"""
init = 10
power = 0
div = 1
count_t = 0

def start():
    global power, init, div, count_t
    calc = 0
    if count_t <= 3:
        calc += init
        power += calc
    elif count_t > 3:
        calc = init / div
        power += calc
    print("Add:", round(calc, 2), "Power:", round(power, 2))

for i in range(10):
    div += 0.25
    count_t += 1
    start()

>Regardless, looks interesting Anon.
Thanks.
>>2985 Yea that seems like it would work ok.
>>2981
>top-quality code
but it is! it gives the right output in the console! it's basically 5 demos of pointer abuse crammed into one file:
1. abusing local variable addresses to find out call depth
2. abusing function addresses to calculate other function addresses
3. abusing stack-allocated null-terminated strings to build a longer string
4. abusing the fact that array[i] is the same as i[array] and *(array+i). though, is it really abuse when the compiler specifically allows these notations as long as exactly 1 identifier is a pointer? also using the most horrid syntax to increment a value to the left: ++*--p
5. abusing zero-length arrays just for type info and abusing nested VLAs to perform multiplication. it's not really pointer abuse, but it's related because it takes a pointer difference between the start and end of a VLA for a calculation
not included is demo 6: changing a function pointer with arguments A to a function pointer taking arguments B and calling it.
also not included is demo 7: fun with pointers in printf https://github.com/carlini/printf-tac-toe
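[Regarding demo 4: the equivalence is easy to show in a couple of lines, since array[i] is defined as *(array + i) and pointer addition commutes. A throwaway illustration:]

```cpp
#include <cassert>

// array[i] is defined as *(array + i), and the addition commutes,
// so i[array] names the same element. Legal in both C and C++, if horrid.
inline bool index_commutes() {
    int xs[] = {10, 20, 30};
    return xs[2] == 2[xs] && 2[xs] == *(xs + 2);
}
```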
>>2985
if you want a simple formula for diminishing returns with a given maximum, it's:

power = max_power * turbine_count / (turbine_count - 1 + max_power / power_per_turbine)

(you can check for yourself that with 0 turbines you get power=0, with 1 turbine it's power=power_per_turbine, and with infinite turbines it's power=max_power)
if you get multiple types of turbines hooked up to the same system, you can calculate an average power_per_turbine. that way it doesn't matter in which order you have your turbines or whether you have 2 normal turbines vs 1 good one and 1 awful one, but having 2 turbines with a power_per_turbine of 30 is slightly better than 3 turbines of 20.
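A quick sketch of that formula in Python (the function name and the zero guard are mine, not from the post; the raw formula divides by zero when power_per_turbine is 0):

```python
def total_power(turbine_count, power_per_turbine, max_power):
    """Diminishing-returns curve that approaches max_power as turbines are added."""
    if turbine_count == 0 or power_per_turbine == 0:
        return 0.0  # guard: the raw formula divides by zero when power_per_turbine is 0
    return max_power * turbine_count / (turbine_count - 1 + max_power / power_per_turbine)

print(total_power(1, 30, 100))          # one turbine gives roughly its rated power (30)
print(total_power(1_000_000, 30, 100))  # a huge turbine count approaches max_power (100)
print(total_power(2, 30, 100), total_power(3, 20, 100))  # 2 turbines @30 beat 3 @20
```

This matches the three sanity checks in the post: 1 turbine gives its rated power, infinitely many give max_power, and fewer better turbines slightly beat more worse ones.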
>>2988
>you can check for yourself that with 0 turbines you get power=0
Nope, when self.power_per_turbine is 0 and/or self.turbine_count is 0 I get "ZeroDivisionError: division by zero".
>with 1 turbine it's power=power_per_turbine
Yes, as expected.
>and with infinite turbines it's power=max_power
How are infinite turbines represented programmatically? I'm not sure Python is capable of creating an "Inf" amount of objects.
Current script: https://0bin.net/paste/iTHqnMlJE3rUoZ5S#HgHcQ9Uknt7FmBGlzsS2BuUgBS9m6EMyYhlHV2ROPgg
With self.max_power = 100000000

Results (without a turbine_count condition):
Power: 4,850,000.0 --- Turbine: 1
Power: 9,251,311.397234144 --- Turbine: 2
Power: 13,263,445.761166818 --- Turbine: 3
Power: 16,935,835.87952859 --- Turbine: 4

Results (with a turbine_count condition):
Power: 4,850,000 --- Turbine: 1
Power: 9,700,000 --- Turbine: 2
Power: 13,263,445.761166818 --- Turbine: 3
Power: 16,935,835.87952859 --- Turbine: 4

At first glance this formula doesn't seem bad, but it causes the power output to fluctuate when I increase/decrease self.max_power even though the output hasn't reached that limit yet; is that intentional? Also, when I set self.max_power to 9700000, 2 turbines will produce less power than that (= 6,466,666.666666667).
>but having 2 turbines with a power_per_turbine of 30 is slightly better than 3 turbines of 20.
That is a good idea. I'm trying to do something similar with the resources themselves once I've made the reactor object more complete, instead of constantly jumping between changing stuff in 3 objects like before. One odd thing about how Factorio handles fuel value is that wood and coal items come out the same when the total fuel value of a full stack is calculated, 200 MJ either way (wood = 100 stack size, coal = 50 stack size).
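Worth noting: that 2-turbine figure drops straight out of >>2988's formula if power_per_turbine is taken to be 4,850,000, as the 1-turbine result suggests. A minimal check (variable values are assumptions read off the posted results, not from the script itself):

```python
max_power = 9_700_000
power_per_turbine = 4_850_000  # assumed from the 1-turbine result above
turbine_count = 2

# diminishing-returns formula from >>2988
power = max_power * turbine_count / (turbine_count - 1 + max_power / power_per_turbine)
print(power)  # 6466666.666666667, matching the number in the post
```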
>>2989
>Nope, when self.power_per_turbine is 0
Sure, but you've got a really shitty turbine if its designed power rating is 0. :^) Indeed, in practice you need to first check whether the power per turbine is valid, or by proxy whether there are any turbines at all; otherwise the formula doesn't make sense.
>it causes the power output to fluctuate when I increase/decrease self.max_power even though the output hasn't reached that limit yet; is that intentional?
Yes, because the formula is a single smooth curve. I made an interactive graph to shed some light on it, but GeoGebra requires registration to share, so here's the setup in Desmos instead: https://www.desmos.com/calculator/v4l49zrkst
If you make the power per turbine very small, like p = 100 000, you'll see that the effective power grows nearly linearly with more turbines and barely changes when you slide the max power around. But the larger the power per turbine is, the more the total output fluctuates with a change in the max power, even with only 2 turbines, and the more you'll see that extra turbines have diminishing returns.
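The "small power per turbine barely fluctuates" behavior is easy to check numerically; a sketch with the numbers picked only for illustration (they're mine, not from the thread):

```python
def total_power(n, ppt, max_power):
    # diminishing-returns formula from >>2988
    return max_power * n / (n - 1 + max_power / ppt)

# small power per turbine: doubling max_power barely moves the output
print(total_power(2, 100_000, 50_000_000))   # ~199 600
print(total_power(2, 100_000, 100_000_000))  # ~199 800
# large power per turbine: the same doubling shifts the output a lot
print(total_power(2, 20_000_000, 50_000_000))   # ~28.6 million
print(total_power(2, 20_000_000, 100_000_000))  # ~33.3 million
```

With ppt = 100 000 the output changes by about 0.1% when max_power doubles; with ppt = 20 000 000 it changes by more than 15%, which is exactly the fluctuation >>2989 describes.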
I'm just beginning a book called C++ Crash Course by Josh Lospinoso, from No Starch Press. In the preface, he covers benefits of C++ that C-only developers can take advantage of simply by compiling their C code with a C++ compiler. Here's an approach that lets C-style code use C++'s RAII idiom, so it won't leak the file handle even if something external to the code breaks (like the disk space being exhausted, for example).

// demo of a SuperC file class, pp. 55-56
#include <cstdio>
#include <cstring>
#include <system_error>

struct File {
    File(const char* path, bool write) {
        auto file_mode = write ? "w" : "r";
        file_ptr = fopen(path, file_mode);
        if (!file_ptr)
            throw std::system_error{errno, std::system_category()};
    }
    ~File() {
        fclose(file_ptr);
    }
    FILE* file_ptr;
};

int main() {
    {
        File file{"last_message.txt", true};
        const auto msg{"We apologize for the inconvenience."};
        fwrite(msg, strlen(msg), 1, file.file_ptr);
    } // last_message.txt is closed here!
    {
        File file{"last_message.txt", false};
        char read_msg[37]{};
        fread(read_msg, sizeof(read_msg), 1, file.file_ptr);
        printf("Read last message: %s\n", read_msg);
    }
}
