FC6 KDE Suspend on a T42 Thinkpad

Just spent a good part of the evening upgrading my Linux T42 ThinkPad to FC6. I wanted to see if I could get suspend-to-memory working, since I never got it to function properly in FC5. This page led to a very simple solution.

I just needed to add acpi_suspend=s3_bios to the kernel line in my grub.conf.
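
For reference, here's roughly what the entry looks like afterwards (the kernel version and root device are just placeholders, the important part is the acpi_suspend=s3_bios at the end of the kernel line):

    title Fedora Core (2.6.18-1.2798.fc6)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-1.2798.fc6 ro root=LABEL=/ rhgb quiet acpi_suspend=s3_bios
            initrd /initrd-2.6.18-1.2798.fc6.img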

Update:
Gotta say that I’m pretty impressed with the latest out of Fedora Core. Everything just pretty much works on my laptop. Exactly what you want.

How to prepare for a Java interview

I just wrapped up another tour of the interview circuit. Not exactly how I would prefer to spend several weeks of my time, but alas it comes with the territory. I noticed a few things during this period.

The first point I’d like to make is that, as a potential interviewee, you need to start preparing for the next job WAY before you’re really ready to start looking. Get your resume together and take a look at what’s on it. Are there areas where you’re weak and could pick up some experience while you’re still at your current job? You need to take on successful projects and show motivation, and all of this needs to happen now, not later. The best way to get a great new job is to show that you’re already a great employee.

Secondly, it’s always better to get a job through a personal reference. It might actually be worth holding off on looking until someone can introduce you to an opportunity. Some companies won’t even consider an applicant unless there’s a reference beforehand; it’s just too risky for them. So make sure you have good relationships with your coworkers ahead of time, and know where they’re going if they leave.

I’ve found that interviewing is the worst way to get to know a candidate, except of course for all of the other ways. There is a big spread out there between good interviewers and poor interviewers. Sometimes after a questionable interview, I’ve gone back over the questions asked and tried to determine what the interviewer was selecting for with each one. Each question should tell you something about the interviewee no matter how they answer. As an interviewer, you want to make sure you’re selecting for traits you would actually want in an employee. If you ask a question about some minutiae of a technical spec and the interviewee doesn’t know the answer, what have you learned? Would you refuse to hire someone because they didn’t know a detail that could be looked up on Google in 10 seconds?

To be prepared for the bulk of the questions asked in a Java Software Engineer interview, you need to study up on two books before you start: Concurrent Programming in Java: Design Principles and Patterns and the Sun Certified Programmer & Developer for Java 2 Study Guide. With those two books you can handle 75% of the questions that interviewers often ask; if you want to push that up to around 90%, look up the Singleton design pattern in the GoF book.
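
Since the Singleton question comes up so often, here’s a minimal sketch of the kind of answer interviewers usually expect (the class name is just for illustration):

    // Lazily initialized Singleton with a synchronized accessor.
    public class AppConfig {
        private static AppConfig instance;

        private AppConfig() { }  // private constructor prevents outside instantiation

        public static synchronized AppConfig getInstance() {
            if (instance == null) {
                instance = new AppConfig();  // created on first use
            }
            return instance;
        }
    }

Being able to discuss the trade-off between this lazy version and simply creating the instance eagerly in a static final field tends to score extra points.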

I’m often left wondering after interviews why so much importance is put on these types of questions while other aspects of the potential employee are completely ignored. There’s a whole lot more to being a great employee than having encyclopedic knowledge of software specs: creativity, flexibility, social skills, business and product understanding. All of these are ignored in most software interviews. Maybe it’s just because they’re harder subjects to nail down?

One interesting trend that I’ve noticed is the increase in tests and homework assignments for the interviewee that are performed offline. This allows the interviewer to get a slightly better feel for how the candidate will perform. I personally feel this is a positive trend.

A little bit of sanity returning to the world

I saw two posts today showing that the pendulum of reason is finally swinging back in the right direction. The first was over at Glenn Greenwald’s Blog and deals with making good on the intelligence mistakes of the last several years. Only problem is that it’s the Canadian Government that’s decided to come clean. The US Government still prefers to sweep things under the rug and hope no one looks.

The second piece was by the UK’s Director of Public Prosecutions, Sir Ken Macdonald, in which he discusses the problems with the idea of having a war on terror. I found this post through Bruce Schneier’s blog, and the discussion there is worth reading.

Suggestion for Google Video

I love using Google Video for their Google Eng Edu videos. This series of roughly hour-long recordings of talks on all sorts of techie topics is great, and I love getting inside access to these presentations. It’s just like being back at school, with the weekly presentation from a visiting professor.

The only issue with the format is that while it’s great to have long videos instead of the usual 5-15 minute clips being pushed by other sites, the files are a pain to deal with. The optimal way for me to watch them is to batch them up and then go through them when I have some free time, say on a flight across the country. For that to work you need offline access, and while Google does let you download these files, it just doesn’t work well. You essentially have to pretend you’re watching the video on pause and let the player’s download continue in the background. Sometimes this works, sometimes it doesn’t, and the worst part is that you can only do one video at a time.

The system is almost there: if Google let you download multiple videos in the background so you could watch them at your convenience, it would be ideal. Of course I’m sure there are revenue issues, with a whole lot of bandwidth being used and no ads being served, but I would sit through a 30-second ad at the beginning to get this feature.

InnoDB faster than MyISAM?

There’s an interesting post over at the MySQL Performance Blog testing performance differences between several storage engines. Their tests show that for some micro-benchmarks covering a lot of the basic usage patterns of databases in a web environment, InnoDB can actually be much faster than MyISAM. This goes against the prevailing belief that MyISAM is the fastest for read access, especially in the very read-heavy world of web applications, and that InnoDB should only be used when transactions are required.

I’ve used InnoDB for the backend of my latest personal project, Cloudgrove, which actually tilts more heavily toward write performance. I’ve always been concerned about read access in very large threaded systems when writes are occurring concurrently. The table-level locking of MyISAM seemed like it could cause problems, since all reads are blocked while a write occurs. With InnoDB there is row-level locking, so writes don’t block reads at the table level.
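
For what it’s worth, the locking behavior comes down to which engine each table uses, something like this (the table definitions are hypothetical, not Cloudgrove’s actual schema):

    -- The ENGINE clause picks the locking granularity per table.
    CREATE TABLE users (
        id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(64)  NOT NULL
    ) ENGINE=InnoDB;   -- row-level locking: a write only locks the rows it touches

    CREATE TABLE page_views (
        id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        user_id INT UNSIGNED NOT NULL
    ) ENGINE=MyISAM;   -- table-level locking: a write blocks every reader on the table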

I got a good feel for how very large MyISAM tables handle load while doing performance work at Webshots, using ~20 large DB servers set up as read/write masters with read-only slaves. I never had a chance to see how a similarly structured InnoDB setup would behave, though. As Cloudgrove grows, I should get a good view of how it holds up.

I’m also currently using foreign keys to enforce data consistency, and that could have a much larger negative impact on performance. I might have to pull those out and move the consistency checks up to the application layer.
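
As a concrete illustration (the table and constraint names here are made up), this is the kind of constraint that would get dropped, leaving the application to verify on its own that the referenced row exists:

    CREATE TABLE users (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
    ) ENGINE=InnoDB;

    CREATE TABLE photos (
        id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        user_id INT UNSIGNED NOT NULL,
        -- costs a parent-table lookup on every insert/update; could move to the app layer
        CONSTRAINT fk_photos_user FOREIGN KEY (user_id) REFERENCES users (id)
    ) ENGINE=InnoDB;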

Getting it wrong on Net Neutrality

There was an interesting piece by Cody Willard of TheStreet.com posted the other day about the cracks forming in Google. I feel the article managed to get Google’s stance on Net Neutrality completely wrong. I do agree, though, that there are some cracks forming in Google’s reputation. The general public is starting to realize how much data Google collects. As long as you trust them that’s OK, but if they lose that trust it’s hard to take the data back. Google has shown that they’re not infallible, with a couple of devastating security holes in the last couple of weeks. The holes were quickly fixed, but these little chips could be just the start. The fact that even the mighty Google can screw up sometimes is going to lead people to reexamine the trust they put in the company.

The article’s final point, though, was that Google is evil for supporting Net Neutrality. I found it rather difficult to fathom how someone could make this claim. You can claim that the telecoms are evil for trying to subvert the network, where evil here means trading the public’s long-term gain for their own short-term gain. But to claim that Google is evil for trying to stop the telecoms assumes that the telecoms are doing good. I have yet to see anyone argue that a tiered internet is good for anyone other than the telecoms.

My reasoning on net neutrality:

Well, to start at the beginning, the core of the problem is that the internet was designed to be a robust way to move data from point A to point B, but it never made any guarantees about when the data would arrive; delivery is best-effort, in keeping with the End-to-End principle. All sorts of fail-safes and checks were put in to make sure that if a router went down, traffic would automatically find other routes to its destination. This allowed the internet to grow even as different companies built out the backbone with different hardware, and in the face of hardware failures and backhoes causing interruptions. For most people everything worked great; sometimes pages would load a bit slowly or your email would take 30 seconds instead of 5, but it would all get there.

The problem became noticeable when technology came along that required data to reach its destination within a certain time frame, namely voice and video. When you have enough excess bandwidth capacity it’s not a problem, but when network links get up around 80% utilization you start to see congestion: packets get queued or dropped and have to be resent. This is horrible for time-sensitive applications because it creates delay.

The next issue contributing to this problem is that the telecoms modeled consumer usage on past behavior, when consumers checked a little email and browsed a few web pages. With those models in mind, they were able to oversell their networks and maximize their profits. If you’ve ever used a cable modem in an oversold area in the evening after work, you know it can be incredibly painful. Also, if you read the fine print in most of these contracts, just because you bought a 6 Mb/s connection doesn’t mean you can actually use all of it; if you do, you’re liable to get kicked off the network as an abuser. With the rise of Vonage, YouTube, and other wildly popular services, average consumer usage skyrocketed and people began to notice that they weren’t getting what they were sold.

So at this point the telecoms have two options: increase capacity and give consumers what they were told they were buying, or tweak the network so that certain types of traffic can be guaranteed to arrive on time. One is expensive; the other gives you a way to charge extra. It’s completely obvious why the telecoms, as public companies, chose the way they did. The question is: what are the implications for users of the internet as a whole?

If the telecoms have a special value-added network where they make all of their profit, will they stop spending anything to improve the standard network? Given their past history of hyping improvements and then not following through, should we be skeptical when they tell us they’ll maintain the old network? Wouldn’t this just lead to a balkanization of the internet? Wouldn’t the democratizing force of the internet be destroyed as the old internet rusts away and content is controlled on the new one?

I would argue that the answer to all of those questions is yes. The internet has become a critical piece of infrastructure that needs to be kept open for the long term good of businesses and consumers and we have seen that amazing things can come out of an open internet. Things that no one at this point in time can even imagine.

“Google says it’s acting in the best interest of consumers and end users. Why the use of force then? A truly “non-evil” company would have no interest in using governmental force to stop attempts at innovation.”

Here you’re claiming that Google is evil for lobbying while Verizon is just innovating, even though Verizon’s lobbying effort is far more extensive and Verizon’s innovation is really just an attempt to sidestep building out its network.

“Google’s evil here stems from the fact that it knows it has won this version of the Internet and wants the government to make sure it stays on top.”

This just doesn’t make sense. Google and Verizon aren’t in the same business and don’t compete. Google is a customer of the telecoms: they pay huge monthly bandwidth bills for the right to use the telecoms’ networks. Google is not getting anything for free. If Verizon doesn’t feel it’s being fairly compensated for the services it supplies, why doesn’t it raise its rates? The issue, again, is that bandwidth is bandwidth; there is no value add there.

I would argue that Verizon may be jealous of Google’s ability to charge for what it provides, but jealousy doesn’t mean you’re going to get your way. The reality Verizon has to deal with is that no one cares which network they use as long as they can get to the endpoints they care about; hence Google has pricing power while Verizon does not.

It’s official

I’m bad luck for the Wolverines (my alma mater). I’ve watched two games so far this year. They lost their first game to OSU while I was watching and are losing their second game to USC (start of the fourth quarter as I’m writing this). Gotta get back to my normal routine and NOT watch football if I want them to win.

Update: Yup, they lost. No more football for me.

The Computer’s Rate Limiting Step

Back when I was still working in the pharmacology lab doing chemistry work, there was a term used when researching biological pathways: the rate-limiting step. In a complex biological system with multiple steps and feedback loops, you need to determine how fast the overall reaction can occur. Often you would find pathways that were essentially one-way, because the forward path was so much faster than the reverse. In these systems the rate of the overall system is set by the speed of the slowest part, i.e., the rate-limiting step.

The key piece of this idea is that it doesn’t matter how much faster you make steps 1, 2, 3, or 4 if step 5 is the rate-limiting step. The overall system gets no faster. Unless you increase the rate of step 5, the system just can’t move any quicker: intermediates pool at step 4 and have to wait to be processed by step 5.
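
The same reasoning can be written out as a toy calculation (the numbers here are made up purely for illustration):

    // Toy pipeline model: overall throughput is the minimum of the step rates.
    public class RateLimitingStep {
        public static void main(String[] args) {
            double[] stepRates = { 100, 80, 120, 95, 10 }; // items/sec for steps 1-5
            double overall = stepRates[0];
            for (double rate : stepRates) {
                overall = Math.min(overall, rate); // the slowest step caps the pipeline
            }
            System.out.println("Overall throughput: " + overall + " items/sec");
            // Doubling steps 1-4 leaves the answer at 10 items/sec; only step 5 matters.
        }
    }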

Now that I’m working with computers, I’ve found that a user’s experience with a computer works the same way. CPUs and memory have been getting faster every year so that people can do increasingly complex things with their machines. Other parts of the PC have not fared as well, though. Hard disks have not been able to increase their speeds at the same rate as the rest of the system, even while their capacities have skyrocketed.

However, the rate-limiting step in the average person’s computer is now internet bandwidth. The internet has become a core part of the computer, and while the computer itself has become enormously faster, the internet connection just hasn’t sped up at the same rate. When you’re uploading photos or downloading music and videos, it doesn’t matter how fast your computer is anymore; any computer produced in the last two years will do just fine. What makes the bigger difference is whether you’re on a T1, DSL, or dial-up.

This has some interesting implications when it comes to computer sales. I believe we’ve hit a plateau with CPUs for now. Intel and AMD are trying to hold onto something new with all of the multi-core marketing, but for the vast majority of people it’s just not going to make a difference. Computers have become a commodity; the models that still command a premium do so through higher quality, not necessarily higher performance.

The next leap in computing performance is going to come from companies building infrastructure and providing content and applications. I’m not sure the telcos will be able to lead the way here; they’re so much better at slowing or stopping innovation than at driving it. It will be exciting to see who leads the charge.