Warp, a fast C and C++ preprocessor – an outdated project

I’m writing this down to catch the attention of folks who work on warp/warpdrive, or of whoever put it on GitHub.
I’ve been trying to compile and use it. With some tweaks, I could compile it. But the source tree and files uploaded at the master branch of https://github.com/facebook/warp are somewhat outdated, and a whole lot of information is missing.

Although I appreciate the contribution to open source from the Facebook folks, it seems to me that it wasn’t done properly.

  • There are plenty of hard-coded paths in the Makefiles (OK, I can fix those),
  • Missing scripts – builtin_defines.sh is missing (it’s tricky to guess its contents, and even if I could arrive at something, I’m not sure it would be optimal),
  • Basic documentation is missing too; for example, the documentation says:

This will produce warp (the core program) and also the drivers warpdrive_gcc4_7_1, warpdrive_gcc4_8_1, warpdrive_clang3_2, warpdrive_clang3_4, and warpdrive_clangdev, each packaged for the respective compiler and version.

  • but none of the Makefiles actually build these drivers, which makes the whole setup harder to understand.
  • As uploaded to GitHub, it is a pretty incomplete project, and it demands a lot of invested time just to get the tool working. That could easily have been avoided, and the contribution made much more worthwhile by open source standards, if someone from FB had checked it more carefully before uploading it.
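For what it’s worth, here is my best guess at what builtin_defines.sh might do. This is purely an assumption based on the script’s name: a preprocessor replacement would need to know the macros the real compiler predefines, and both gcc and clang can dump them with -dM -E.

```shell
#!/bin/sh
# Hypothetical reconstruction of the missing builtin_defines.sh
# (an assumption; the real script may differ).
# -dM -E prints every macro the compiler predefines,
# e.g. "#define __GNUC__ 4", for an empty translation unit.
CC="${1:-gcc}"
"$CC" -dM -E -x c /dev/null | sort
```

Run it as `./builtin_defines.sh g++` (or clang) to compare what each compiler predefines.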

I tried contacting nearly all of the GitHub contributors to this project, asking them to upload the missing pieces, and I haven’t received a reply from anyone yet. So I’m writing this blog post hoping to grab their attention, or the attention of someone who already knows warp/warpdrive well, so that the missing information can be filled in and the project becomes more useful.

My One Cent Contribution To OpenSource

Last week was very discouraging. On one bad Friday, the running-builds queue was too long, license constraints kept a lot of builds waiting for resources, and it was chaos. On top of all this, we have under-performing Linux boxes, bad NFS performance, and a g++ compiler that has suddenly started compiling code slower than before.

This delayed the CI builds. Our ccache setup, meant to speed up builds, wasn’t helping because of various factors.

Now everything is nearly back to normal: we dedicated a few Linux machines to CI, the CI builds run on those dedicated machines, and the builds are considerably faster with ccache.

And, more than what I learnt about debugging these nasty environment issues, I learnt about people and how strongly they react when something you own is not doing well. It wasn’t pleasant, as everybody was under pressure. Understood.

By the way, in the process of debugging these problems, I made a very tiny contribution to open source, and I’m happy about it. 🙂 I’m a big fan of open source projects. They are free, and everything is “open”. There are no hidden intricacies, because, if you are capable, you can always go through the source code.

Anyway, I’m hoping that we solve the remaining problems, or get them solved. I’m glad I saved a few seconds of compilation time and gave my cent to open source.

The Difference

As I wrote in my last post, I tried tackling our build-time problems by deploying ccache on the two branches with the biggest build times.

I was hoping that our developer community would “learn” and “use” the easy, fast ways of building software as communicated. But some of these folks are so ridiculously set in their ways that they habitually run one command, and they simply do not care about other build commands that are more apt. So, as my colleague suggested, I enforced ccache carefully to reduce build times. I’m waiting to see the results next week.
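For the record, the sketch below is roughly what “enforcing” ccache amounts to. The directory, the compiler list, and the 20 GB limit are example values, not our actual setup:

```shell
# Sketch: ccache's "masquerade" mode. Symlinks named after the
# compilers sit early in PATH, so even a plain `make` (the one
# command everybody actually runs) goes through the cache.
CCACHE="$(command -v ccache)"
mkdir -p /usr/local/ccache-bin
for tool in gcc g++ cc c++; do
    ln -sf "$CCACHE" "/usr/local/ccache-bin/$tool"
done
export PATH="/usr/local/ccache-bin:$PATH"   # add to the build users' profile
ccache -M 20G    # cap the cache size
ccache -s        # check hit/miss statistics later
```

The point of the symlink approach is that nobody has to change their build command; the shadowed `gcc` is ccache, which execs the real compiler on a cache miss.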

In the midst of these annoying problems, I’m often awestruck by the commitment, coordination, and power of open source projects. Physically disconnected, connected only via the internet or IRC, they still produce wonderful products. By contrast, in certain places, although people sit right beside each other, they do not follow the instructions given, creating a whole lot of mess for themselves and others and delaying the entire delivery. I simply do not know whom to blame, and I wonder what the difference between the two sets of people is.

Change in the linker behavior (binutils 2.22?)

I’m currently handling the task of migrating our builds to Red Hat 6.x (or at least trying them there). So far our builds have run on Red Hat 5.3/5.5, and given that that is a relatively old version of Red Hat, we are planning the migration.

One of the issues I observed was with the way GNU ld from binutils 2.22 works. I’d built binutils 2.22 on a Red Hat 6.1 machine, and it looks like the way dynamic linking works has changed slightly. Below is the error I got; I don’t see it with binutils 2.21.x.

/usr/bin/ld: note: 'some_reference' is defined in DSO some.so so try adding it to the linker command line

I read about the possible workaround, or rather a fix I’d say, on the Fedora wiki page http://fedoraproject.org/wiki/UnderstandingDSOLinkChange , and it worked just fine in our case.

The bottom line is: while generating a binary or .so from objects/shared objects, you must make sure that any dynamically linked library that resolves symbol references in those objects/shared objects is itself linked explicitly on the command line. In other words, no indirect linking anymore.

If my explanation doesn’t make sense, read through the wiki link given above, and it will clear up the confusion.

Deleting Directories from Subversion Increases Size of Repository

Nothing great about this post, but during one of the training sessions a few days ago, a colleague was concerned about the number of branches we keep in our Subversion repository, and he asked why we can’t just delete the unused/done-with branches so as to reduce the space utilization.

I told him that deleting these branches wouldn’t reduce the space used by the repo on the server; instead, it actually increases it. He gave a sarcastic sigh! I told him it is only the client that benefits when there are fewer branches (or directories) to show in the Repo Browser. I guess it takes less time to show 10 branches/directories than 100.

Given that nothing added to version control (at least in SVN) can ever be removed by a normal user (well, if you really want to remove some file/dir, there are ways to do it: resort to svnadmin dump and svndumpfilter), it follows logically that, for the client to stop showing a directory that was removed, the server has to have some entry saying the deleted dir no longer exists in the HEAD revision. And that new entry definitely adds a few bytes of data.
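For completeness, the dump-and-filter route, the only way to truly reclaim space, looks roughly like this. The repository names and the branch path below are made up for illustration:

```shell
# Rebuild the repository without the unwanted path. A plain
# `svn delete` only adds a new revision; this actually drops history.
svnadmin dump /srv/svn/myrepo > myrepo.dump
svndumpfilter exclude branches/dead-branch < myrepo.dump > filtered.dump
svnadmin create /srv/svn/myrepo-slim
svnadmin load /srv/svn/myrepo-slim < filtered.dump
```

Note this creates a new repository, so clients have to relocate to it, which is exactly why it is not something a normal user can do.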

Here is some proof:

D:\Repositories\DELETE_INCREASES_SIZE is where the data is physically stored on my laptop (which is acting as the server) for this test repo.

When a client accesses my repo via the URL https://GNQ85BS.net/svn/DELETE_INCREASES_SIZE/,
the server reads the data from the above directory and lets the client display it in the form of a repository.

C:\Users\venkrao>svn co https://GNQ85BS.net/svn/DELETE_INCREASES_SIZE/

Checked out revision 0.


The size of the repo on the server is given below:

C:\Users\venkrao\DELETE_INCREASES_SIZE>D:\fun\Tools\du.exe D:\Repositories\DELETE_INCREASES_SIZE

Du v1.5 - report directory disk usage
Copyright (C) 2005-2013 Mark Russinovich
Sysinternals - www.sysinternals.com

Files:        27
Directories:  11
Size:         28,911 bytes
Size on disk: 1,51,552 bytes


Now, I import some data into the repository (i.e. add data to the repo):

C:\Users\venkrao>svn import -m "" D:\userdata\venkrao\Downloads\blogger-importer
.0.5 https://GNQ85BS.net/svn/DELETE_INCREASES_SIZE/
Adding         D:\userdata\venkrao\Downloads\blogger-importer.0.5\blogger-import
Adding         D:\userdata\venkrao\Downloads\blogger-importer.0.5\blogger-import
Adding         D:\userdata\venkrao\Downloads\blogger-importer.0.5\blogger-import
Adding         D:\userdata\venkrao\Downloads\blogger-importer.0.5\blogger-import
Adding         D:\userdata\venkrao\Downloads\blogger-importer.0.5\blogger-import
Adding         D:\userdata\venkrao\Downloads\blogger-importer.0.5\blogger-import
Adding         D:\userdata\venkrao\Downloads\blogger-importer.0.5\blogger-import
Adding         D:\userdata\venkrao\Downloads\blogger-importer.0.5\blogger-import
Adding         D:\userdata\venkrao\Downloads\blogger-importer.0.5\blogger-import

Committed revision 1.


Now, the size of the dir that physically holds this data ON THE SERVER (i.e. my laptop) obviously increases:

C:\Users\venkrao\DELETE_INCREASES_SIZE>D:\fun\Tools\du.exe D:\Repositories\delete_increases_size

Du v1.5 - report directory disk usage
Copyright (C) 2005-2013 Mark Russinovich
Sysinternals - www.sysinternals.com

Files:        30
Directories:  11
Size:         64,174 bytes
Size on disk: 1,96,608 bytes

C:\Users\venkrao\DELETE_INCREASES_SIZE>svn log https://GNQ85BS.net/svn/DELETE_INCREASES_SIZE/
r1 | test | 2013-04-04 22:41:27 +0530 (Thu, 04 Apr 2013) | 1 line


C:\Users\venkrao\DELETE_INCREASES_SIZE>svn ls https://GNQ85BS.net/svn/DELETE_INCREASES_SIZE/

NOW, I delete the data from the repo. See what happens to the size of the dir that holds the data physically: it actually increases, BECAUSE the server has to make *SOME* new entries (in layman’s terms) telling clients that this data was removed from the repo in some revision, while it is in fact still present in the older revisions.

C:\Users\venkrao\DELETE_INCREASES_SIZE>svn del -m "Deleting to see if size of repo on the SERVER, increases" https://GNQ85BS.net/svn/delete_increases_

Committed revision 2.

Notice that the size of my repo dir on the server has increased by 354 bytes (64,528 − 64,174).
C:\Users\venkrao\DELETE_INCREASES_SIZE>D:\fun\Tools\du.exe D:\Repositories\delete_increases_size

Du v1.5 - report directory disk usage
Copyright (C) 2005-2013 Mark Russinovich
Sysinternals - www.sysinternals.com

Files:        32
Directories:  11
Size:         64,528 bytes
Size on disk: 2,04,800 bytes