Recently I was testing a function that generated slugs for an application. To make the slugs unique, we appended the microtime to the slug when needed. After updating my data provider to account for the microtime version of the slug, my test cases passed only intermittently. Using microtime for generated data introduces a margin of error: time elapses between when the code under test runs and when you compare the results.
I tried capturing microtime before and after the call that generated the slug, but I would still end up roughly 0.0010 seconds behind the microtime the slug generator used. I could not figure out how to make these tests pass 100% of the time, within reason.
If you can’t control the generator for tests that involve random data, you have two options:
- refactor to remove the randomness
- live with a degree of certainty
For the purposes of generating a slug with an appended microtime, I determined that an acceptable degree of certainty was this: the slug’s timestamp, at the seconds level, must be equal to or at most one second before the microtime recorded by the test case. If the difference is greater than one second, there is definitely a problem. The one-second threshold could probably be tightened (to 0.0010 seconds, for example), but I needed to get the test written in a timely manner, and a one-second degree of certainty was acceptable at the time.
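That tolerance check can be sketched as follows (JavaScript here rather than the original PHP; `makeSlug` is a hypothetical stand-in for the real generator, and the seconds-resolution timestamp and one-second window are the assumptions described above):

```javascript
// Hypothetical slug generator: appends the current time, in whole seconds.
function makeSlug(title) {
  var seconds = Math.floor(Date.now() / 1000);
  return title.toLowerCase().replace(/\s+/g, "-") + "-" + seconds;
}

// Tolerance-based assertion: the timestamp embedded in the slug must be
// equal to "now", or at most one second behind it.
function assertSlugFresh(slug) {
  var embedded = parseInt(slug.split("-").pop(), 10);
  var now = Math.floor(Date.now() / 1000);
  var drift = now - embedded;
  if (drift < 0 || drift > 1) {
    throw new Error("slug timestamp drifted by " + drift + "s");
  }
}

assertSlugFresh(makeSlug("Hello World")); // passes within the 1s window
```

The point is that the test asserts a *bound* on the difference rather than an exact value, which is what makes it deterministic.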
In a project for work I came across a bug in our CSS that left our nav bar in IE8 (possibly IE9 as well) with a gradient that went from blue to dark blue. As part of this project we went with Twitter Bootstrap, and we use Assetic and LessPHP to manage and compile the Less files. I didn’t understand why we had this issue, since we had the gradient, just with the wrong colors. So I started digging into the Less files, and then finally the compiled CSS.
I just recently had a client whose data import requirements changed mid-project. We started off with an INTEGER field for this particular value because it made the most sense. Once I got the real data from the client, I found that there are duplicate pages (representing multiple items on a page) and that some items span more than one page. My first reaction was, “Great, now I have to refactor the entire table structure to handle these items.” It turns out the cleanest solution was to change the INTEGER field to a VARCHAR field. This allows items to be marked as 1a, 1b, or 1c, indicating that they share a page, and it doesn’t disrupt the client’s data (they create a simple CSV from other software). Perfect, that works as expected, and we now have multiple items per page! Then I went to the listing page. Everything was sorted, but by an unnatural sort: VARCHAR values sort lexically, so page 10 comes before page 2.
The solution was to use
CAST(page AS UNSIGNED) ASC, page ASC
in the ORDER BY clause. This accomplishes a “natural sort” and works really well: rows are ordered by the numeric portion of the page first, and the full string breaks ties.
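The same two-key ordering can be sketched outside of SQL. Here is a JavaScript comparator mimicking what MySQL does (the page values are illustrative, not the client’s data):

```javascript
// Sort page values the way ORDER BY CAST(page AS UNSIGNED) ASC, page ASC
// does in MySQL: by the leading integer first, then by the full string.
function naturalPageSort(pages) {
  return pages.slice().sort(function (a, b) {
    var na = parseInt(a, 10) || 0; // MySQL's CAST also reads leading digits
    var nb = parseInt(b, 10) || 0;
    if (na !== nb) return na - nb;
    return a < b ? -1 : a > b ? 1 : 0;
  });
}

console.log(naturalPageSort(["10", "1a", "2", "1b", "1"]));
// ["1", "1a", "1b", "2", "10"]
```

A plain lexical sort of the same input would have produced `["1", "10", "1a", "1b", "2"]`, which is exactly the listing-page problem described above.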
WTF! The /etc/cron.d/php5 job should be optional, not forced upon you. This has bitten me twice: once at work, and now on my business machines. It isn’t funny, and there is really no need for it. If you are doing it “for the users,” then let them have massive session caches; it is THEIR responsibility to tune the server.
FYI: you can safely delete /etc/cron.d/php5 without any issues. Don’t let stupidity on the part of package maintainers fool you.
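For context, Debian-style packages use this cron job to garbage-collect stale session files instead of PHP’s built-in probability-based GC. If you remove it, it’s worth checking that PHP’s own GC is enabled; the directive names below are standard php.ini settings, but the values shown are only suggestions:

```shell
# Remove the forced session-cleanup cron job.
sudo rm /etc/cron.d/php5

# Debian-style packages often set session.gc_probability = 0, relying on the
# cron job instead. With the cron job gone, re-enable PHP's own GC in php.ini:
#   session.gc_probability = 1
#   session.gc_divisor     = 100
#   session.gc_maxlifetime = 1440
```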
So .text() doesn’t work for retrieving the content. The solution: use .html(). The problem with this is that in IE 7/8 you end up with extra whitespace in the content. The solution to that: $.trim(), and now you have your content matching what modern browsers return from .text().
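A plain-JavaScript sketch of the same fix (the object below is a stand-in for whatever node jQuery was reading, and the regex is the classic $.trim equivalent):

```javascript
// Mimic what jQuery's .html() returns in IE 7/8: content padded with
// extra whitespace that .text() in modern browsers would not include.
function cleanContent(el) {
  var raw = el.innerHTML;               // what .html() hands back
  return raw.replace(/^\s+|\s+$/g, ""); // what $.trim() does: strip both ends
}

var fakeIeNode = { innerHTML: "  some content\r\n" };
console.log(cleanContent(fakeIeNode)); // "some content"
```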
This stumped me long enough: I was using a value in jQuery’s inArray function and it wasn’t finding the entry, even though I knew the value existed in the array. Not the most friendly approach for modern browsers, but it maintains backwards compatibility with older versions of IE.
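One common cause of this (an assumption on my part, since the original snippet isn’t shown) is a type mismatch: $.inArray compares with strict equality, just like Array.prototype.indexOf, so a string pulled from the DOM never matches a numeric array entry. Plain indexOf shows the behavior:

```javascript
// $.inArray mirrors Array.prototype.indexOf semantics: comparison is ===,
// with no type coercion.
var ids = [1, 2, 3];
var fromDom = "2"; // attribute and input values come out of the DOM as strings

console.log(ids.indexOf(fromDom));         // -1: "2" !== 2
console.log(ids.indexOf(Number(fromDom))); // 1: cast first, then search
```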