The other day the COO, a co-worker, and I were talking about things happening at the company when we took a quick side trek into the value of IT certifications. My original stance was that certifications weren’t valuable and that the skills we sharpen along the way matter more. After talking with them, though, I found that not all certifications are created equal; some have more value than others. I don’t have all the answers on how to identify the valuable ones, but after thinking about it longer I believe I’ve come up with something to help us measure whether getting a certification is worth it.
Recently I had to upgrade a server from Debian 4 to Ubuntu 14.04 LTS. One of the problems I encountered was that FreeRADIUS behaved badly when I ran
service freeradius reload. Mainly, I would end up with a lot of failed-binding log entries, because upstart had no clue how to reload the process. It turns out that the upstart script was misconfigured.
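The version of upstart shipped with Ubuntu 14.04 understands a reload signal stanza, which tells upstart what signal to send on reload. A hypothetical corrected job, with paths and options that are my assumptions rather than the distribution’s actual file, might look like:

```
# /etc/init/freeradius.conf -- hypothetical sketch, not the shipped job
description "FreeRADIUS daemon"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
# FreeRADIUS re-reads its configuration on SIGHUP; without a correct
# "reload signal" (or a correctly tracked PID), "service freeradius reload"
# signals the wrong process and authentication bindings start failing.
reload signal SIGHUP
exec /usr/sbin/freeradius -f
```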
I’ve recently been converting some VMware Fusion VMs to VirtualBox, and part of that process involved re-remembering how to do something I had done before. This post serves as my memory, and as some documentation, of how this is done.
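As a rough sketch (the command and filenames are illustrative, not taken from my actual steps), the heart of such a conversion is turning the VMware .vmdk disk into a .vdi that VirtualBox manages natively:

```
# Convert the VMware disk image to VirtualBox's native VDI format.
# (VirtualBox can also attach a .vmdk directly; converting is optional.)
VBoxManage clonehd source-disk.vmdk converted-disk.vdi --format VDI
```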
With the release of jQuery 1.9, using .toggle() as .toggle(handler(eventObject), handler(eventObject)) was removed (it had been deprecated in 1.8). Last year one of my clients needed this functionality because other triggers fired at the same time the content was shown or hidden. To assist with the upgrade I wrote a jQuery plugin called ToggleEvent that does something similar to the old .toggle() syntax. Just recently I upgraded another of my work projects to jQuery 1.11.0 and forgot about the loss of .toggle() being used this way. Luckily I remembered that I had solved the problem before and dusted off the ToggleEvent plugin.
Realizing that I had forgotten both to write about the plugin and to publish it, I have now done so on GitHub: jQuery ToggleEvent plugin
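I haven’t reproduced the plugin’s actual API here, but the behavior the old .toggle(handlerA, handlerB) provided — alternating between two handlers on successive clicks — can be sketched in a few lines of plain JavaScript (the names are illustrative):

```javascript
// Return a handler that alternates between fnA and fnB on successive calls,
// mimicking the removed .toggle(handlerA, handlerB) behavior.
function makeToggleHandler(fnA, fnB) {
  var useFirst = true;
  return function () {
    var fn = useFirst ? fnA : fnB;
    useFirst = !useFirst;
    return fn.apply(this, arguments);
  };
}

// With jQuery, a handler built this way could be bound like:
// $('#details-link').on('click', makeToggleHandler(showDetails, hideDetails));
```

Keeping the alternation state in a closure, rather than on the element, is one of several ways to do this; the actual plugin may well differ.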
Recently I was testing a function that generated slugs for an application. To make the slugs unique, we append the microtime to the slug when needed. After updating my data provider to account for the microtime version of the slug, my test cases passed only intermittently. Using microtime for generated data introduces a margin of error: the time between when the code executes and when you compare the results.
I tried capturing microtime before and after the function call that generated the slugs, but I would still end up roughly 0.0010 seconds behind the microtime the slug generator used. I could not figure out how to make these tests pass 100% of the time, within reason.
If you can’t control the generator for tests that involve random data, you have two options:
- refactor to remove the randomness
- live with a degree of certainty
For the purposes of generating a slug with an appended microtime, I decided the degree of certainty was this: the slug’s microtime, at the seconds level, will either equal the microtime of the test case’s own microtime call or precede it by at most one second. If the difference is greater than one second, there is definitely a problem. The one-second threshold could probably be tightened (to 0.0010 seconds, for example), but I needed the test written in a timely manner and a one-second degree of certainty was acceptable at the moment.
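A minimal sketch of the idea (the slug generator here is a stand-in I wrote for illustration, not the application’s actual function):

```php
<?php
// Stand-in slug generator: slugify the title and append microtime for uniqueness.
function generateSlug($title)
{
    $slug = strtolower(trim(preg_replace('/[^A-Za-z0-9]+/', '-', $title), '-'));
    return $slug . '-' . microtime(true);
}

// Test-side check: extract the appended timestamp and assert it is no more
// than one second behind the test's own microtime() call.
$slug     = generateSlug('My First Post');
$now      = microtime(true);
$slugTime = (float) substr($slug, strrpos($slug, '-') + 1);

assert($now - $slugTime >= 0);   // the generator ran before the comparison
assert($now - $slugTime <= 1.0); // within the one-second degree of certainty
```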
After running into “session could not be started because it was already started with session_start() or session.auto_start” on a project, I realized that removing the cron job is not the only thing that needs to happen to let PHP manage its own sessions.
I’m assuming that this more-paranoid-than-usual security measure was meant to help inexperienced admins and developers prevent session hijacking should the web server be breached. However, if root is gained, it doesn’t matter anyway. I won’t claim to be an expert in server security, but I can tell you that Debian, and therefore Ubuntu, are the only ones following this paranoid practice. Coming from the FreeBSD world, you are responsible for the security of your machine, not the developers or port maintainers.
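For context: Debian’s stock php.ini turns PHP’s own session garbage collection off (session.gc_probability = 0), and a packaged cron job cleans the session directory instead. Letting PHP manage its own sessions means re-enabling GC; roughly (the divisor and lifetime values below are illustrative, not a recommendation):

```ini
; php.ini -- re-enable PHP's built-in session garbage collection.
; Debian ships session.gc_probability = 0 because a cron job cleans
; the session directory instead.
session.gc_probability = 1
session.gc_divisor     = 1000
session.gc_maxlifetime = 1440
```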
In a work project I came across a bug in our CSS that left our nav bar in IE8 (and possibly IE9) with a gradient that went from blue to dark blue. For this project we went with Twitter Bootstrap and use Assetic and LessPHP to manage and compile the Less files. I didn’t understand why we had this issue, since we had a gradient, just with the wrong colors. So I started digging into the Less files, and then finally the compiled CSS.
While working on a project I kept receiving a blank screen after enabling a function that used Zend_Db_Table_ResultRow’s findManyToManyRowset(). The log file showed the following error, along with a long stack trace of the FirePhp plugin looping while encoding the object.
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 83 bytes) in /mnt/hgfs/myproject/library/Zend/Wildfire/Plugin/FirePhp.php on line 778
To figure out the problem I chose to disable FirePhp; this seemed like the logical option considering how FirePhp was stuck.
The underlying error was then displayed by ZF, and after I added a primary key to the relationship DbTable, ZF went on its merry way. In the end FirePhp was re-enabled and happily continued about its business.
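The actual table class isn’t shown here, but in ZF1 the fix amounts to declaring the intersection table’s compound primary key explicitly. A sketch, with class and column names invented for illustration:

```php
// Hypothetical intersection table for a many-to-many relationship.
// All names here are illustrative, not from the project in question.
class Application_Model_DbTable_UsersRoles extends Zend_Db_Table_Abstract
{
    protected $_name = 'users_roles';

    // The missing piece: without an explicit compound primary key,
    // findManyToManyRowset() misbehaves on a table like this.
    protected $_primary = array('user_id', 'role_id');

    protected $_referenceMap = array(
        'User' => array(
            'columns'       => 'user_id',
            'refTableClass' => 'Application_Model_DbTable_Users',
            'refColumns'    => 'id',
        ),
        'Role' => array(
            'columns'       => 'role_id',
            'refTableClass' => 'Application_Model_DbTable_Roles',
            'refColumns'    => 'id',
        ),
    );
}
```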
I just recently had a client whose data import requirements changed mid-project. We started off with an INTEGER field for this particular value because it made the most sense. Once I got the real data from the client, I found out that there are duplicate pages (representing multiple items on a page) and that some items span more than one page. My first reaction was “Great, now I have to refactor the entire table structure to handle these items.” It turns out that the cleanest solution was to change the INTEGER field to a VARCHAR field. This allows items to be marked as 1a, 1b, or 1c, indicating that they are on the same page, and doesn’t disrupt the client’s data (they create a simple CSV from other software). Perfect, that works as expected and we now have multiple items per page! Then I went to the listing page. Everything was sorted, by an unnatural sort: as strings, so 10 came before 2.
The solution was to use
CAST(page AS unsigned) ASC, page ASC
in the ORDER BY. This accomplishes a “natural sort” and works really well.
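To illustrate with made-up data (MySQL; the table name is invented), given page values 1, 1a, 1b, 2, and 10:

```sql
-- A plain ORDER BY page sorts lexicographically: 1, 10, 1a, 1b, 2.
-- CAST(page AS unsigned) sorts by the leading numeric value first
-- (MySQL casts '1a' to 1), and page ASC then breaks the ties:
SELECT page
FROM items
ORDER BY CAST(page AS unsigned) ASC, page ASC;
-- yields: 1, 1a, 1b, 2, 10
```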
At work we are up to date with PHPUnit 3.6, yet officially ZF1 only supports PHPUnit 3.4. Looking through the code there is some support for PHPUnit 3.5, but I actually make use of the additional assertions that PHPUnit 3.6 provides. Initially I intended to just extend the Zend_Test PHPUnit classes, but I quickly ran into the problem that the ZF classes didn’t implement required interface methods when loaded by the PHP interpreter. I then thought about copying the Zend_Test classes and renaming them, but that would have created a maintenance issue, since we don’t want to maintain our own fork of the Zend_Test classes.
My final solution was simply to create my own DatabaseTestCase. This is what I came up with: