Ever since I moved out to California, one of the things that I’ve secretly wanted was to be able to listen to my favorite music while driving. The problem has always been that my favorite channel by far is the Vocal Trance channel on Digitally Imported Radio, which meant I would need to be able to stream internet radio while driving in the car.
Well, today that day has finally arrived. I noticed yesterday that DI has an iPhone app that allows you to stream their premium channels over 3G. I was listening today for about an hour while out running some errands with Caitlin. I only lost the signal once, for about five seconds while driving through some hills; the app does a great job of buffering and keeping the music going. The quality is great, and with about an hour of streaming it only used 25 MB of data according to the built-in meter (I’ll have to double-check against AT&T’s meter).
One of the greatest things about the app is that if the stream does cut out for any reason, it detects that it has reached the end of the buffered audio and gracefully fades out, so there aren’t any jarring cuts in or out. Every streaming app should copy this.
For Chrome on the Mac, it appears that RSS auto-discovery is not included by default. This is the feature that puts a little RSS icon in the URL bar when the page you’re on has an RSS feed available. To enable this feature for Chrome on the Mac you need to install this extension. It’s an extension from Google and seems to work great.
Then if you find an RSS feed that you’d like to add to ReadPath, drag the link below to your bookmark bar.
Add To ReadPath
Then when you’re on a page that you want to subscribe to, click the bookmarklet and ReadPath will subscribe you to the feed.
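For the curious, a bookmarklet like this is just a one-line `javascript:` URI that hands the current page’s URL off to a subscribe endpoint. Here’s a rough sketch of the idea; the endpoint path (`/subscribe`) and query parameter (`url`) are illustrative placeholders, not necessarily ReadPath’s actual API.

```javascript
// Sketch of how a "subscribe" bookmarklet works. The endpoint path
// and parameter name below are illustrative placeholders.
function buildSubscribeUrl(pageUrl) {
  // Encode the page URL so it survives as a query-string value.
  return 'http://readpath.com/subscribe?url=' + encodeURIComponent(pageUrl);
}

// As a bookmarklet, the same idea gets packed into a javascript: URI:
// javascript:location.href='http://readpath.com/subscribe?url='+encodeURIComponent(location.href);
```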
I’ve turned off comments on this blog for now. While I’ve gotten some great comments in the past, the volume of spam just isn’t worth the hassle. Instead I’ve put a mailto: link at the bottom of each article. This is the best way to get ahold of me anyway. If you send something relevant and worth sharing I’ll add it to the blog post.
Oh, I changed the name of the blog as well. Not as worried about having my real name on the web anymore.
I’ve spent the last several days playing with and configuring CDH3B2 with HBase. My test cluster is an ESXi server with 20 GB of RAM booting a bunch of CentOS 5 VMs. Definitely not something that you’d want to run in production, but it works great for testing. It actually helps expose some bugs, thanks to the slow I/O of running a bunch of servers on the same disk.
My production cluster is still running HBase 0.20.3 and has performed flawlessly. It has a table holding half a billion content items, taking up several terabytes of space, and has made it through several disk failures without a hitch. However, I’m looking at the Cloudera distro because I’m not happy with having to repackage everything, test it out, push to the live cluster, and then retest to make sure everything made it properly every time a new release comes out. I’m hoping that the Cloudera distro will simplify a lot of this. I’m also hoping that, with the patches they include and the testing being done, I’ll have a more stable cluster. I had a really bad experience with the 0.20.4 update, which is why production is still on 0.20.3.
One major problem that I still have, even with the Cloudera distro, is that the LZO code isn’t included due to licensing problems. I’m really hoping that the new compression options can be packaged up soon so that these libraries don’t need to be maintained separately anymore.
A couple of quick notes from my testing.
- The way that HBase interacts with and manages ZooKeeper has changed; it’s now more like running an unmanaged ZooKeeper setup. Not only did the ZooKeeper settings in hbase-site.xml need to be correct on all of the servers, but when I ran map-reduce jobs against HBase it seemed to be reading from /etc/zookeeper/zoo.cfg, so that file needed to be correct on all of the regionservers as well. I had initially only edited it on the server running my ZooKeeper instance. I also added the file to the HADOOP_CLASSPATH in hadoop-env.sh, but I’m not sure that that’s required.
- I wish there were a better way to manage the HADOOP_CLASSPATH and its inclusion of the HBase and ZooKeeper jar files. I’m trying to find a way so that it doesn’t need to be edited each time I update the software to a new version.
- I had to change the value of dfs.datanode.socket.write.timeout. On the live cluster I have it set to 0, which is supposed to mean no timeout, but it appears there’s a bug in this latest version that doesn’t respect that value properly, so I just set it to a high number.
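For reference, the two settings from the notes above look roughly like this. The hostname and the timeout value are illustrative; the quorum entry goes in hbase-site.xml on every server, and the write timeout goes in hdfs-site.xml.

```xml
<!-- hbase-site.xml (every server, per the note above): point HBase at
     the ZooKeeper ensemble. The hostname here is a placeholder. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com</value>
</property>

<!-- hdfs-site.xml: instead of 0 (the broken "no timeout" value in this
     release), use a large explicit timeout in milliseconds (1 hour shown). -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>3600000</value>
</property>
```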
Lead story and headline on the NBC evening news again tonight talked about the “Controversial statement by President Obama about the NY Mosque”. If the news editors were being honest they might say that “President Obama today reaffirmed a belief in freedom of religion that has been a foundation of our country for 200+ years”. But of course that isn’t as exciting as saying that Republicans are trying to stir up fear and hatred of foreigners again.
It is quite sad to see this spin. Journalism’s purpose is to find the truth in a story, not to invent its own spin to sensationalize it. But we’ve gotten to the point where the corporate interests behind the news have an interest in keeping the crossfire between the right and the left going. If that means taking some liberties, they’ve shown that they’re willing to do so.
I wouldn’t be terribly sad to see almost every major news body we have today collapse. They’ve strayed too far from their true purpose. There will still be a need for honest news gathering and there are people willing to devote their lives to that purpose. The way just needs to be cleared for these people to come to the fore again.
I had to test out a desktop virtualization product (Pano Logic) this week and as part of the installation I needed a VMware ESX base system. I’m a huge user of their Workstation product, but I had never used the ESX line since it used to be so expensive and required certified hardware. Things have changed though and it’s now possible to download a copy of ESXi for free and to run without a dedicated SAN.
One of the difficulties with VMware is that their acronyms can be very difficult to wade through. ESXi is what they refer to as a hypervisor: essentially a very cut-down operating system designed only to run other virtual machines. There are some hardware requirements for running ESXi; I had to go through three or four servers before I found one for which the installer had all of the drivers. I finally got it running on a server I had picked up from Penguin Computing (2x dual-core Opteron with 4 GB of memory and a 250 GB hard drive).
Once I found a server that worked, the system installed quickly. The next problem was that you need to download the vSphere client to administer the server, and it’s Windows-only (there are command-line clients for other operating systems, but I wasn’t ready for that yet). I didn’t have a Windows box lying around (all Linux and Mac), so I had to launch a WinXP VM in Workstation on my Linux desktop to administer my ESXi server. Amazingly, everything worked great.
The next issue that I ran into was that I already had a large number of VMs created that I was using on workstation, but I couldn’t see how to get them on to the ESXi server. In the vSphere client there are clear instructions on how to create a new VM or download an appliance, but not how to import an existing VM. It turns out that VMware has a very simple way of doing this using the VMware Converter. This product works as a switchboard allowing you to convert or move VMs from one place to another, a really handy tool.
Overall ESXi is a great tool for running a whole bunch of server VMs. VMware offers a huge number of management products in the vSphere product line for managing load and moving VMs in a datacenter. But if you just need to run a few VMs on a single server I would definitely recommend looking at ESXi.
I just placed a pre-order for Amazon’s newest Kindle the other day. This one is to replace Jaimie’s Kindle since she was still using the very first generation model and there are quite a few updates in this newest version. Supposedly much better battery life, better contrast, and faster page turns, all great things for a power reader.
We also decided to go with the Wi-Fi-only model, which is cheaper; most of our book reading is at home where there is full coverage anyway. If we’re out and about we read on the iPhone and then sync back to the Kindle to continue reading at home.
Hopefully this latest version will arrive before the next round of books that we’ve been waiting for. I’m waiting for The Evolutionary Void, and we’re very happy that Mockingjay will also be released in a Kindle version.
I was happy to hear that Bezos was focusing on creating the best book reader instead of chasing yet another tablet.
If you value your email, you’re going to want to hold off on upgrading your iPhone to iOS 4. There are several posts (here and here) detailing issues that users are having with the Mail app on iOS 4. For me, email has become unusable.
- Often the Mail icon will show that I have new mail, but when I open the app there is nothing there, even after hitting the refresh button.
- After I delete emails, they show up again as unread.
- I’ll often get a list of blank emails that I can’t do anything with.
Closing the Mail app has no effect. The only thing that has helped so far is to double-click the home button to get to the quick-switch screen, hold down the Mail icon until the icons vibrate and the minus badges appear (I believe this kills the background process), and then tap the minuses to close all running background processes. Once I do this, Mail will begin to sync again. This leads me to believe there are issues with communication between the background process and the app when it comes to the foreground.
I’m checking every day to see if Apple has pushed an update, but this was a pretty big bug to let slip into iOS 4.
It’s taken a while, but I’m finding myself using Chrome as my browser more and more regularly. Firefox has started to get bloated again. I just don’t see why it needs so much memory; if I let it run a while it takes close to half of the 4 GB I’ve got on my MacBook. Things start to bog down after a day or two of running. Maybe I’m being too demanding, but I just don’t feel that I should have to restart a browser periodically to maintain performance.
Overall I’m just happy to see that I have a choice. While performance and stability are great, I hope there’s more competition among the browsers as web application platforms and not just web content distribution. There’s still a long way to go in this area.
There’s been a lot of discussion on the web over the last several weeks about how much trust we can put in Facebook when it comes to handling private data. They’re making a play to be the primary repository of identity on the web: the hub that other websites link off of to determine personal connections and demographics for a user. Facebook already has a huge lead in this area, with 400+ million users who use their actual names instead of screen names.
It would be nice to have a place where we can set up who our friends and coworkers are as well as what we’d like to share with them. Having to recreate this network each time we want to use a new site is a complete pain. But who can we trust to store this valuable info?
Leo Laporte has mentioned in his podcasts that for some reason he just doesn’t trust Facebook as much as he would trust a company like Google to fill this role. I agree wholeheartedly that it’s risky to trust Facebook. For me, the core of this mistrust is that Facebook hasn’t yet found its truly profitable niche the way Google has. Google makes so much money in search ads that it can afford to not make money in other areas and take the high ground when it comes to privacy and openness. Facebook doesn’t have a profit center like that to support the other areas of its business. The scary part is that the data Facebook collects could be quite valuable. It’s really going to come down to where they decide to draw the line on how to use our data. And because this is still unknown, and they’ve taken several missteps in the past, it’s difficult to really trust Facebook.
In tonight’s interview with Dan Stein, the head of FAIR, I believe that Rachel went over the line. She brought up some very questionable position points from a handful of people that are or were connected with the organization. The worst being some ideas taken from a founder of FAIR written in the 1980s.
As the interview went on, though, I actually found myself agreeing with Dan Stein that this was McCarthy-style journalism: taking a single point from years ago and using it to paint an entire organization. I think Rachel was riled up and went into attack mode too quickly. The proper way to raise these points would have been to ask whether FAIR, as an organization, still supports the beliefs she turned up in her research, or whether it will disavow those positions. If they won’t disavow the racist positions, then there is a story there. But for Rachel to claim that, because someone once said something reprehensible, the entire organization is racist was itself unfair.
Usually I’m a big fan of her reporting, but I think that in this instance she should have looked at what she had and decided that it wasn’t enough to build a story on. She’s developing a great reputation as a journalist who will stand up to anyone and ask tough questions; she’s just got to make sure the quality of the story is there.
There was a bit of questioning of the Risk Management Manager that, when added to that of the CFO, led to some very interesting insights into Goldman Sachs. The market-making staff has the ability to hold or sell positions taken opposite their customers’ trades at a time and place that maximizes the profit for Goldman Sachs.
Goldman Sachs as a firm has internal views into almost every market there is, seeing both sides of trades as well as the demand for the creation of new products. The Manager would not say that internal talk among different groups wasn’t used in determining how to handle products that Goldman itself held; he squirmed out of taking any position.
The clear implication you can take from these different data points is that Goldman Sachs has the ability to hold any product they like and sell the ones they don’t. They are not a neutral market-making entity. They don’t see selling products that they don’t like to their own clients as a conflict of interest. Customers of Goldman Sachs need to ask themselves why the firm is selling any product. Because if the item up for sale were worth holding, Goldman itself would hold it rather than sell it to the customer.
Goldman Sachs’s ability to take positions and to be a market maker should be stripped. There should be a requirement that they can’t profit by taking positions as a market maker; there is no way to separate the conflict of interest otherwise.
I’m watching the testimony live on Bloomberg TV right now and a couple things are sticking out.
- The question is coming up about the ability of a party to be a market maker and also take part in the market. With highly liquid stocks this seems OK, since there is a fair amount of visibility into the market and no information gap between different parties. But with illiquid assets like CDOs, knowing who is involved and moving in which direction gives you a huge advantage over other players. It would only take a little bit of info to shift the advantage dramatically. If the Senators are smart, they’ll see that with these types of assets market makers shouldn’t be allowed to take part in the market. It’s back-door insider trading.
- There is another line of questioning on the rating of the CDO. Goldman Sachs is sticking with the line that everyone was a big boy and should have been able to look at the security and determine the risk. The problem is that this has been proven completely false: no one was able to accurately determine the risk of these items. The fact that they allowed Paulson to pick the contents and then short the CDO gave him a huge advantage in the deal. Whether or not this was illegal isn’t clear; it should not be allowed to happen again.
- The Goldman employees referenced the mathematical models that allowed rating agencies to take BBB+ sub-prime mortgages and rate them, as a group, AAA. It’s shocking how far from reality this model was. It’s these types of models and this kind of economic thinking that have spurred my interest in this area. How can the supposed top economic minds get things so wrong?
So, I was doing the laundry today and realized that at 18 months Caitlin has hit a critical milestone. She now has more clothes than her dad, BY VOLUME, and I’ve got some pretty big clothes. I suppose I should just accept this and realize that as she moves on towards high school and college it’ll only get progressively worse, but it was still a bit of a shock.
What a crazy idea: ask an actual socialist what they think of Obama’s so-called “socialist” policies. It turns out real socialists aren’t that happy with Obama; the candidate closest to real socialism is actually Sarah Palin.