Coin Flip Conundrum
I watched this video,
[youtube https://www.youtube.com/watch?v=IAiNqQi30-Y]
Very interesting.
So, I managed to verify it through some scripting.
https://gist.github.com/allencch/36544a84fdb8159756618290209f1750
And I got a result like this:
[code lang=text]
Result:
Target: [0, 1] average steps = 3.987
Target: [0, 0] average steps = 5.9505
[/code]
P/S: I wrote a more robust coin flip script, which accepts a target tossing sequence of any length. [here]
C++ Unit Test and Dependency Injection
TDD (test-driven development) is widely adopted in modern development such as web development, because it allows developers to test the solution robustly in order to produce a more stable product.
Higher-level programming languages like JavaScript and Ruby allow developers to easily mock functions and data to test the target specification. However, a language like C++ is not designed with TDD in mind; mocking functions there is more complex.
How to solve C/C++ memory leaking (in Linux)?
My hobby project Med is written in C++. Many of its features require dynamic memory allocation and instantiation, because complex data is impractical to pass by value (unlike JavaScript objects and arrays). Since C++ has no garbage collection, it is possible for the developer to fail to free/delete the dynamically created memory properly.
In Med, the program retrieves memory from another process. That means it needs to make a copy of the scanned memory, which involves creating dynamic memory (using the new operator). When the program filters the memory to narrow down the target result, it gets a new copy of the memory with the updated values, then compares it with the previous copy. The unmatched entries need to be discarded (freed/deleted); the matched ones need to replace the old ones (the old copies must also be freed/deleted, because the new ones are dynamically allocated).
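One way to make this class of leak much harder to write (a sketch under my own simplified data model, not Med's actual code) is to hold each copied buffer in a `std::unique_ptr`, so that discarding or replacing a snapshot frees the old allocation automatically:

```cpp
#include <algorithm>
#include <cstring>
#include <memory>
#include <vector>

// A scanned memory snapshot: address plus an owned copy of the bytes.
struct Snapshot {
    unsigned long address;
    std::unique_ptr<unsigned char[]> data;
    size_t size;
};

// Replace an old snapshot with a fresh copy; the previous buffer is
// released automatically when the unique_ptr is overwritten.
void update(Snapshot& snap, const unsigned char* fresh, size_t size) {
    auto copy = std::make_unique<unsigned char[]>(size);
    std::memcpy(copy.get(), fresh, size);
    snap.data = std::move(copy);  // old allocation freed here
    snap.size = size;
}

// Discard unmatched snapshots: erasing from the vector destroys the
// unique_ptrs, which delete[] their buffers. No manual delete needed.
void discardUnmatched(std::vector<Snapshot>& snaps,
                      const unsigned char* expected, size_t size) {
    snaps.erase(std::remove_if(snaps.begin(), snaps.end(),
                    [&](const Snapshot& s) {
                        return s.size != size ||
                               std::memcmp(s.data.get(), expected, size) != 0;
                    }),
                snaps.end());
}
```

Tools like Valgrind (`valgrind --leak-check=full ./med`) are still useful to confirm nothing slips through, but with RAII ownership the common "forgot to delete the old copy" mistake mostly disappears.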
Firefox Legacy version 56.0.2
The latest Firefox, version 57 and above, a.k.a. Firefox Quantum, is fast, but… that is not what I need.
As a developer, I favour Chromium over Firefox, and I use Firefox mainly for downloading. The add-on DownThemAll is a must. The feature I love most is the ability to highlight selected hyperlinks and download them as a batch. I can also name the downloaded files by their original filenames or based on the text in the HTML.
NVidia and hibernation issue, partially solved
In my previous post, I mentioned NVidia and xcompmgr, but that is not the true cause of Chrome failing to update its display.
The root cause is partially found. The issue is caused by the Optimus laptop setup (dual graphics cards, NVidia with Intel). Under unknown conditions, resuming from hibernation causes the Intel graphics card to stop working properly. This can be checked by running “glxgears” after resuming: you will see that OpenGL fails to refresh the display.
Firefox or Chromium (software development)?
I switched from Chromium to Firefox as my primary web browser recently. Then I switched back to Chromium again.
Chrome is often claimed to consume a lot of memory, and recent Firefox updates claim to be faster and to use less memory. That is why I switched to Firefox. I agree that it is much faster than before. However…
I faced a critical issue. A less important issue worth mentioning is that Firefox does not support Google Hangouts.
NVidia and probably xcompmgr
I have a Dell Vostro 5459 running Arch Linux. Previously, whenever I hibernated, resuming would produce a black screen from which I could do nothing.
Then I believed that one of the NVidia updates had fixed this issue.
However, very soon after, I faced another issue: resuming from hibernation caused Chromium's content to freeze, i.e. the content did not redraw. This happened not only with Chromium, but also with Opera and SMPlayer. I thought it was caused by NVidia. I tried a lot of solutions and found nothing on the Internet. I also installed “bbswitch”; nothing solved it.
Complexity and simplicity
When we develop a solution or a system, we are prone to choose a simple solution, because a simple solution is just better than a complex one. However, most of the time we choose a simple solution inappropriately, and this gradually causes more trouble as the system grows.
The complexity of a solution should depend on the complexity of the problem itself, not the other way round. For example, we cannot create an operating system with a single programming statement, nor with just a single source file. Because an operating system is very complex (managing devices, memory, processes, etc.), no simple solution can fulfil the requirements.
C++ future
I have recently been updating my hobby project Med, a memory editor for Linux, which is still under heavy development with various bugs.
In this project, I use several C++1x features (compiled with the C++14 standard). The most recent notable feature is multi-threaded scanning. In memory scanning, scanning through the accessible memory blocks sequentially is slow, so I need to scan the memory blocks in parallel. To implement this, I create multiple threads that each scan a share of the memory blocks.
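The idea can be sketched roughly like this (a simplified illustration, not Med's real scanner: the `Block` type and the one-byte search are placeholders): split the block list among `std::thread` workers so that no two threads ever touch the same result slot, then join them.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Placeholder for one memory block; a real scanner would hold an
// address range copied from the target process.
struct Block {
    std::vector<unsigned char> bytes;
};

// Scan one block for a one-byte value, appending hit offsets.
void scanBlock(const Block& block, unsigned char value,
               std::vector<size_t>& hits) {
    for (size_t i = 0; i < block.bytes.size(); ++i)
        if (block.bytes[i] == value)
            hits.push_back(i);
}

// Scan blocks in parallel: one thread per hardware core, each thread
// taking every n-th block, so no two threads share a result vector
// and no locking is needed.
std::vector<std::vector<size_t>> parallelScan(
        const std::vector<Block>& blocks, unsigned char value) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::vector<size_t>> results(blocks.size());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([&, t] {
            for (size_t i = t; i < blocks.size(); i += n)
                scanBlock(blocks[i], value, results[i]);
        });
    }
    for (auto& w : workers) w.join();  // wait for all scans to finish
    return results;
}
```

Pre-sizing `results` before the threads start is the key design choice here: each worker writes only its own slots, so the threads need no mutex at all.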
Academic people should use Git and TeX
Mr Torvalds created two amazing things: Linux and Git. The former is an OS kernel; the latter is a version control system. Unfortunately, neither is prevalent in Malaysia.
When I was a lecturer, creating a new programme with various courses was truly exhausting. The worst part was recording the changes to the documents for the government agency's accreditation. If you are systematic, you will back up the files. But backing up the files does not tell you what changes you made, unless you create another note for every change, which doubles the work. You might say you can use Microsoft Word's compare feature to see the changes between two documents, but that is totally impractical if the documents are big and the changes are vast.