Every programmer occasionally, when nobody’s home, turns off the lights, pours a glass of scotch, puts on some light German electronica, and opens up a file on their computer. It’s a different file for every programmer. Sometimes they wrote it, sometimes they found it and knew they had to save it. They read over the lines, and weep at their beauty, then the tears turn bitter as they remember the rest of the files and the inevitable collapse of all that is good and true in the world.
Programmers. Constantly underrated, even though no one but them understands what they do.
Also: frick’n brilliant post.
I’d write a plain old JS equivalent but trying to wrap my head around all of the indirection in the above example is making me want to crawl under a desk and bang my head on the floor until the brainmeats come out so I don’t have to subject myself to this madness any further.
Because everyone loves a good rant.
Everywhere you go nowadays you have access to open WiFi networks (hotels, bars, public hotspots, etc.). The problem with these is that the traffic isn't encrypted. This means that if the server you're connecting to isn't encrypting your traffic (over SSL/TLS, for example), it is open for everyone to listen to in plaintext. So, to solve this we'll set up a VPN server on our Pi, and connect through that when on public networks.
First off, let's update our package lists and upgrade our packages to stay up to date with the latest and greatest:
sudo apt-get update
sudo apt-get upgrade
Next, let's install the VPN server software.
sudo apt-get install openswan xl2tpd ppp lsof
Once the installation finishes, it’s time to configure the server.
IPSEC, Iptables and network settings
The first thing we are going to configure is IPSEC. Start by editing
sudo nano /etc/ipsec.conf
and change the following:
At the end of the file, append the following:
conn L2TP-PSK
    authby=secret
    pfs=no
    rekey=no
    type=tunnel
    esp=aes128-sha1
    ike=aes128-sha-modp1024
    ikelifetime=8h
    keylife=1h
    left=192.168.1.212
    leftnexthop=%defaultroute
    leftprotoport=17/1701
    right=%any
    rightprotoport=17/%any
    rightsubnetwithin=0.0.0.0/0
    auto=add
    dpddelay=10
    dpdtimeout=20
    dpdaction=clear
The wildcard port in the setting
rightprotoport=17/%any is a workaround for OS X-based clients (such as Macs and iOS devices). The IP in
left=192.168.1.212 needs to be changed to the IP address of your server running the VPN software.
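If you're unsure which address to use, one quick way to look it up is sketched below. This assumes a typical single-interface setup where the first address printed by `hostname -I` is the one you want:

```shell
# Print the first IPv4 address assigned to this machine; on a
# single-interface Pi this is the address to put in left=
hostname -I | awk '{print $1}'
```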
Once done, save and exit. Now we need to set up rules for the firewall and IP traversal. First, let's make the changes needed for the system to remember them on reboot.
sudo nano /etc/sysctl.conf
In this file, edit the following settings:
net.ipv4.ip_forward=1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
Next up, let's define a firewall script which will apply the iptables settings on bootup:
sudo nano /etc/firewall-rules.sh
Enter the following in the file:
#!/bin/sh
IPT="/sbin/iptables"
$IPT --table nat --append POSTROUTING --jump MASQUERADE
If you have any other rules you want applied on boot, just add them as well. Save and exit. Make the file executable and owned by root:
sudo chown root /etc/firewall-rules.sh
sudo chmod 700 /etc/firewall-rules.sh
Now, let's have it run on network initialisation:
sudo nano /etc/network/interfaces
In this file, right after your default network interface, add an invocation of the script:
For reference, I use only the ethernet port, and this is what my config looks like:
iface eth0 inet dhcp
    pre-up /etc/firewall-rules.sh
For now, let's apply the rules manually:
sudo iptables --table nat --append POSTROUTING --jump MASQUERADE
And the network settings needed:
for vpn in /proc/sys/net/ipv4/conf/*; do echo 0 > $vpn/accept_redirects; echo 0 > $vpn/send_redirects; done
For some reason I couldn't get that for loop to work via sudo; if the same happens to you, simply switch to a root shell and execute it there.
Now, verify the settings with the following:
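One way to confirm the forwarding and redirect settings took effect is to read them back from /proc (the paths below are assumed from the sysctl keys above):

```shell
# Should print 1 if forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward
# Should print 0 for both if redirects are disabled
cat /proc/sys/net/ipv4/conf/all/accept_redirects
cat /proc/sys/net/ipv4/conf/all/send_redirects
```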
Start configuring L2TP by editing the main L2TP configuration file:
sudo nano /etc/xl2tpd/xl2tpd.conf
Append the following at the end of the file. The local IP should be the IP of the server running the VPN software. The IP range should be a range of addresses handed out to connecting clients, and shouldn't overlap with your normal DHCP range. Another thing worth noting is that we will be doing authentication via PPP/PAM, so all users must be real users on the server. This has the advantage that the passwords are stored hashed by the system rather than in a separate plaintext file. We will also refuse CHAP due to vulnerabilities in that protocol.
[global]
ipsec saref = yes

[lns default]
local ip = 192.168.1.212
ip range = 192.168.10.235-192.168.10.250
refuse chap = yes
refuse pap = yes
unix authentication = yes
require authentication = yes
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes
Save and exit. Next up, the options file we just specified above.
sudo nano /etc/ppp/options.xl2tpd
Enter the following settings:
ipcp-accept-local
ipcp-accept-remote
ms-dns 192.168.1.212
noccp
auth
mtu 1200
mru 1000
crtscts
hide-password
name l2tpd
proxyarp
debug
lock
connect-delay 5000
login
The login statement on the last line is needed for the PAM authentication to work. The
ms-dns setting should point to your primary name server. If you are uncertain which that is, look it up in your resolver configuration. What you are looking for is the first line starting with nameserver; note the IP address. If you've previously set up your Raspberry Pi as an ad blocking server, you'll likely want to point to that machine as your name server. It is possible to add multiple servers as well, if you want to set up a chain of fallback servers.
Now, we’ll add the services to
rc so they’ll start automatically on bootup:
sudo update-rc.d ipsec defaults
sudo update-rc.d xl2tpd defaults
Now it’s time to add the users you need for VPN:
sudo adduser vpnuser1
sudo adduser vpnuser2
Remember the passwords you set, since they will be used to authenticate later on. Also, make each a VPN-only user by denying standard login on the machine.
sudo usermod -s /sbin/nologin vpnuser1
sudo usermod -s /sbin/nologin vpnuser2
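To double-check that the shell change took, you can inspect the account entries (vpnuser1 and vpnuser2 are the example users from above):

```shell
# Print username and login shell for each VPN account;
# the shell column should read /sbin/nologin
getent passwd vpnuser1 vpnuser2 | cut -d: -f1,7
```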
Now we need to set up the IPSEC secret:
sudo nano /etc/ipsec.secrets
In this file, add the following:
%any %any: PSK "asupersecretverylongkey"
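Rather than inventing a key by hand, you can generate a long random one. This sketch uses /dev/urandom and base64, but any decent random source will do:

```shell
# 32 random bytes, base64-encoded: a reasonable pre-shared key
head -c 32 /dev/urandom | base64 | tr -d '\n'; echo
```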
where the key (of course) should be better chosen, preferably long and random. Next up, setting up the allowed users for PAP, and assigning static IP addresses to them.
sudo nano /etc/ppp/pap-secrets
Here, add the following for each user you have set up. In my case, I’ve set up 2 users, and as such I assign 2 IP addresses from the range we specified in
/etc/xl2tpd/xl2tpd.conf. The first column is the username, the second is the name of the service we specified in
/etc/ppp/options.xl2tpd. The third column should be an empty string, since we'll be using PAM for authentication. The last column is the IP address that will be assigned to the connecting user.
vpnuser1 l2tpd "" 192.168.10.235
vpnuser2 l2tpd "" 192.168.10.240
Save and exit. Finally, restart all services.
sudo /etc/init.d/pppd-dns restart
sudo /etc/init.d/ipsec restart
sudo /etc/init.d/xl2tpd restart
Since we are going to use our VPN solution from outside of our home network, we need to open up (and forward) UDP ports 500, 4500 and 1701 in our router.
You should now be able to connect using your preferred device both from within the network and from the outside.
There is a risk that I’ve missed something when documenting this. I tried documenting everything along the way, but during my own troubleshooting I kept going back and changing settings. If for some reason it doesn’t work, please let me know. Better still, if it doesn’t work, and you are able to fix it, let me know what the problem was and how you fixed it.
If (for some reason) not everything is working as intended, the following files are a good source for troubleshooting:
To set up an adblocking server which blocks ads for everyone on the network.
How does it work?
We’re going to set up a local DNS server, for which we will set up rules relating to ad connected queries. These requests will be directed to a local web server, which always returns a 1x1px transparent gif. Other requests will be looked up in real DNS servers and returned to the requester.
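For reference, the "transparent gif" trick is tiny: a minimal 1x1 transparent GIF is only 43 bytes. The sketch below writes an equivalent pixel to disk using octal escapes; the byte values follow the GIF89a format:

```shell
# Header, screen descriptor, 2-color palette, transparency extension,
# image descriptor, pixel data and trailer: 43 bytes in total
printf 'GIF89a\001\000\001\000\200\000\000\000\000\000\377\377\377\041\371\004\001\000\000\000\000\054\000\000\000\000\001\000\001\000\000\002\002\104\001\000\073' > /tmp/pixel.gif
wc -c < /tmp/pixel.gif
# prints 43
```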
What will be installed
First off, let's download and configure the web server. In a suitable directory (where you will want the server files to be), download the pixelserv script, make it executable and edit the file:
wget proxytunnel.sourceforge.net/files/pixelserv.pl.txt
mv pixelserv.pl.txt pixelserv
chmod 755 pixelserv
nano pixelserv
In the file you will need to configure the IP and port you want the server to run on. Since we’ll be using it to redirect ad requests, I recommend you run it at port 80 if possible. The IP should be the IP address of the machine running the server. If uncertain, check it by running
ifconfig

and look for the section belonging to the ethernet interface:
eth0      Link encap:Ethernet  HWaddr b8:27:eb:f5:85:df
          inet addr:192.168.1.212  Bcast:192.168.1.255  Mask:255.255.255.0
The inet addr is the IP address you’re looking for.
Next we're going to install DNSMasq, a lightweight DHCP and caching DNS server. It will allow us to set up rules for specific hosts (announcing local machines on the local network, and overriding global DNS records).
DNSMasq is available in the official repository, so install it from there:
sudo apt-get install dnsmasq
Once apt finishes, edit the DNSMasq configuration file:
sudo nano /etc/dnsmasq.conf
At the end of this file, append an external reference to an adblocking dns file:
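The reference itself is a single dnsmasq conf-file directive; assuming the filename used in the next step, it would look like this:

```
conf-file=/etc/dnsmasq.adblock.conf
```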
Save and exit. Edit the new file referenced at the end of the main configuration file:
sudo nano /etc/dnsmasq.adblock.conf
In this file add your chosen rules. The following is an excerpt from the file I’m using; each row states that for the given address (rotation.affiliator.com), the IP address stated is returned (192.168.1.212). The IP that is returned should be the IP for the machine that is running Pixelserv (which in this case happens to be the same machine we’re configuring):
address=/www.adtoma.basefarm.net/192.168.1.212
address=/rotation.affiliator.com/192.168.1.212
address=/adserver.adtech.de/192.168.1.212
address=/track.adform.net/192.168.1.212
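With many rules, it can be handy to list which domains are currently blackholed. A small sketch, assuming the file path above and the address=/domain/ip format:

```shell
# Extract the domain from each address=/domain/ip line
grep -o 'address=/[^/]*/' /etc/dnsmasq.adblock.conf | sed 's|address=/||;s|/$||'
```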
Save and exit. Configure your Pi to use itself as the primary DNS server, and fallback on OpenDNS and Google public DNS:
sudo nano /etc/resolv.conf

nameserver 192.168.1.212
nameserver 208.67.222.222
nameserver 208.67.220.220
nameserver 8.8.8.8
nameserver 8.8.4.4
Save and exit, and restart dnsmasq:
sudo service dnsmasq restart
The last step is to configure your router to point to the DNSMasq server as the primary nameserver. Done.
The Raspberry Pi is a perfect target for setting up your own wireless backup solution if you live in an Apple-centric environment. This guide is based on a Raspbian install.
- Raspberry Pi, Model B
- Raspbian Wheezy
What will be installed
- HFS utils
- Netatalk 3.0.1
Mount external HFS+ partitions
First of all we need to be able to read and write to HFS+ partitions. In order to do that, we need to install HFS packages, and to mount our partitions properly. Let's start by installing the needed packages:
sudo apt-get install hfsplus hfsutils hfsprogs
I personally partitioned and formatted the external drive with Disk Utility on my MacBook Pro and just hooked it up to one of the USB ports on the Pi afterwards, but feel free to prepare the disk any way you want. Once you're done, connect the disk to your Pi and check the names of its partitions:

sudo blkid
This will output something like the following:
/dev/mmcblk0p1: SEC_TYPE="msdos" UUID="3312-932F" TYPE="vfat"
/dev/mmcblk0p2: UUID="b7b5ddff-ddb4-48dd-84d2-dd47bf00564a" TYPE="ext4"
/dev/sda1: LABEL="EFI" UUID="70D6-1701" TYPE="vfat"
/dev/sda2: UUID="3d6b835f-2632-3319-b4bf-ff23f9dc1260" LABEL="CloudCity" TYPE="hfsplus"
/dev/sda3: UUID="c10e0de5-f900-3ecb-8780-89b815e02450" LABEL="Bespin" TYPE="hfsplus"
Note the last 2 lines of the output; these correspond to the partitions created on the external disk, and are what we’ll be configuring for automounting.
Create mount points for your partitions; I have 2 partitions, so I created 2 directories to mount the partitions to.
sudo mkdir -p /mnt/bespin
sudo mkdir -p /mnt/cloudcity
In order to get the disk to automount on bootup, edit fstab and insert entries for the new partitions there:
sudo nano /etc/fstab
Insert the following, using the information you looked up earlier:
UUID=3d6b835f-2632-3319-b4bf-ff23f9dc1260 /mnt/cloudcity hfsplus force,defaults 0 0
UUID=c10e0de5-f900-3ecb-8780-89b815e02450 /mnt/bespin    hfsplus force,defaults 0 0
From now on you will be able to reboot without having to log in and mount the partitions manually. For now, let's mount the configured partitions manually and verify the setup in fstab:
sudo mount /mnt/bespin
sudo mount /mnt/cloudcity
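To confirm both partitions actually mounted, you can check the kernel's mount table; the hfsplus entries should list the two mount points from fstab:

```shell
# List mounted hfsplus filesystems; expect one line per partition
grep hfsplus /proc/mounts || echo "no hfsplus mounts found"
```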
Avahi, DB and security packages
Avahi is an open source implementation of Apple's Zeroconf specification, which is what we'll be using to announce the availability of the AFP shares on the network. We'll also be installing database utils and crypt packages for security.
sudo apt-get install avahi-daemon libavahi-client-dev libdb5.3-dev db-util db5.3-util libgcrypt11 libgcrypt11-dev
Netatalk is what handles the actual AFP shares on the network. Using version 3 greatly simplifies the setup compared to version 2, which most guides I've come across are based on. Netatalk is not available in apt (at the time of writing), so we'll build it manually instead. First, download the sources from SourceForge and unpack them in a suitable directory:
cd
mkdir tarballs
cd tarballs
wget http://prdownloads.sourceforge.net/netatalk/netatalk-3.0.1.tar.gz?download
mv netatalk-3.0.1.tar.gz\?download netatalk-3.0.1.tar.gz
tar -zxvf netatalk-3.0.1.tar.gz
cd netatalk-3.0.1
./configure --with-init-style=debian --with-zeroconf && make && sudo make install
This will take some time; excellent opportunity for a coffee break. Next up, configuring Netatalk. First we’ll create a user for the time machine share:
sudo adduser timeuser
Change the permissions for the Time Machine drive:
sudo chown -R timeuser:timeuser /mnt/cloudcity
Next, it’s time to edit the config file for netatalk:
sudo nano /usr/local/etc/afp.conf
Here we'll set up our shares. The Global section contains settings that apply to the entire Netatalk server. Important here is to change the IP mask so it matches your network; I allow access for everyone on my subnet. The following section is the AFP share meant for Time Machine. Note that we use the user we created earlier for the share.
[Global]
hosts allow = 192.168.1.0/24
log file = /var/log/netatalk.log

[Time Machine]
path = /mnt/cloudcity
valid users = timeuser
time machine = yes
Save and exit. Now, let’s restart the service:
sudo service netatalk restart
Now your Raspberry Pi should appear in Finder, and you should be able to connect to it with the credentials you set up earlier.
By editing (or creating if it doesn’t exist) the avahi conf file for afpd, we can make the icon for the Raspberry Pi appear as if it was a Time Capsule as well.
sudo nano /etc/avahi/services/afpd.service
Edit the file as follows:
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
  <service>
    <type>_device-info._tcp</type>
    <port>0</port>
    <txt-record>model=TimeCapsule</txt-record>
  </service>
</service-group>
As seen in the txt-record, the computer will now appear as a Time Capsule. Now, just restart avahi:
sudo service avahi-daemon restart
And you should be good to go. Now all that is left is to edit the Time Machine preferences and add your newly created Time Machine share as a Time Machine drive and it will (hopefully) start backing up.
I can’t even say what’s wrong with PHP, because— okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.
You pull out a screwdriver, and you see it’s one of those weird tri-headed things. Okay, well, that’s not very useful to you, but you guess it comes in handy sometimes.
You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.
You pull out the pliers, but they don’t have those serrated surfaces; it’s flat and smooth. That’s less useful, but it still turns bolts well enough, so whatever.
And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there’s no clear problem with the set as a whole; it still has all the tools.
Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what’s the problem with these tools? They’re all I’ve ever used and they work fine!” And the carpenters show you the houses they’ve built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.
That’s what’s wrong with PHP.
This pretty much sums up how I feel every time I get involved in a project where the code is written in PHP. Whatever you do it just feels wrong, and if you read the entire linked blog post you get a pretty good idea why.
Also, all open source projects written in PHP I’ve come across have had the following in common:
- poorly structured
- hard to read
- difficult to debug
As I see it PHP promotes bad design, which isn’t a good trait for a programming language. The question is who’s to blame: the language for teaching developers bad habits, or the developers for accepting the language for what it is?
Sparrow is the best mail client for OSX, hands down. It gets a lot of details right. Facebook integration for avatars. Excellent threading for conversations. The way attachments are handled through cloud services is simply brilliant. 1
So, obviously it is a big deal when Sparrow for iPhone hits the app store. Some of the features have made the transition from OSX, such as the Facebook integration. Some didn’t, such as the way OS X handles attachments. I’ve spent the last days on Sparrow for iPhone, and it is a great client. As expected, it’s miles ahead of Mail.app. But all is not perfect either.
The lack of push notifications is not a problem, but it is annoying. This really is a problem with Apple's App Store guidelines: today there simply isn't a way for an app like Sparrow to do client-side push. The alternative is to hand your credentials to a server run by the Sparrow team, which in turn logs in, checks your mail and notifies you. The problem is that by doing so you've handed over the credentials to the most sensitive service you use; the one where all your password resets end up.
In my setup I work around the problem by leaving my home computer running at all times and having it check my mail. If I get new mail, I forward the notification with Prowl, which has a redirect set up to open the main view in Sparrow. Not perfect, but it works. If my home connection goes down, I stop getting notifications. That, on the other hand, IS a problem.
If Apple opens up the possibility of using background processes in a way similar to those used by VoIP apps, Sparrow could use local push notifications. This would be a much better solution.
For now it remains a good e-mail app for those able to run a two-component setup like mine. For the rest, the appeal is a bit more limited.
Attachments have long had a problematic relationship with e-mail. Sending files through services such as Dropbox or Cloud is faster and keeps the email itself leaner. ↩
Our development team recently got inspired by this project and decided to get our own Thunder Missile launcher to fire at developers pushing non-working code for review. This beauty now sits on a workstation waiting for someone to break the build. We currently use Hudson as our Continuous Integration server, which in turn triggers builds on Gerrit events. So, each time someone submits for review, they are a potential target.
The missile launcher itself is controllable through a web server on the machine it is connected to. It is possible to control its behavior, as well as have it fire upon people, by simply passing commands to the web server (but don't tell anyone). To keep everyone focused it randomly patrols the room as well (random movement pattern at random intervals).
Things like these are not just fun and gimmicky; they also tell you something about the workplace itself. Being able to work someplace where you can set up something like this without management objecting is a privilege. It builds team spirit and keeps motivation up.
Needless to say, everyone on the team loves it.
Speaking of the developer guidelines for Android: the inconsistent behavior of the back button is the single most common irritation I've read about when it comes to Android. If developers adhere to the new guidelines for Navigation with Back and Up, this should mostly cease to be a problem. A bit puzzling, though, why the guidelines are consistent in all cases but one: system-to-app navigation. I quote:
If your app was reached via the system mechanisms of notifications or home screen widgets, Up behaves as described for app-to-app navigation, above.
For the Back key, you should make navigation more predictable by inserting into the task's back stack the complete upward navigation path to the app's topmost screen. This way, a user who has forgotten how they entered your app can safely navigate to the app's topmost screen before exiting it.
Can’t really see why you would want to ruin a consistent behavior like this with special cases. And how would a user be any less confused by ending up at the topmost screen of the app? Other than that, solid recommendations.
Guidelines for how to design apps for Android while maintaining a consistent look and feel, and predictable behavior. One of the great things about iOS is the Human Interface Guidelines, so this is definitely a step in the right direction for Android.
Stunning replacement icon for Sublime Text 2, the sleek and powerful text editor (available on Windows, Mac and Linux).
- Markdown documents found in /Apps/scriptogram/posts are synchronized and converted into blog posts.
- Only the scriptogram folder is accessible to scriptogr.am.
- Custom domains can be used for your blog.
- Custom CSS can be used to personalize the appearance.
When I use an app on my phone I expect it to work whether I'm online or offline. Look at the standard apps on your phone: email client, text messages, notes. They all work when you have no mobile or wifi signal. Sure, you can't send or receive new mail, but you can read your old mail, which is a hugely useful feature; in fact I'd say a vital one. Say you're trying to find where you agreed to meet someone when you're out and have no mobile signal. No problem: your phone still has that email or text message with the venue stored somewhere.
Now try to use the Facebook app offline: it'll just display a sad smiley and say it couldn't reach the Internet, and that we should try again. I hope the hypothetical friend you arranged to meet via Facebook likes waiting.
Last time I checked, all the principal mobile platforms had this crazy thing we like to call "storage" where you can, get this, store things so they're available later. It's not hard; we've been storing data for quite a while now. But in the rush to keep the app flexible whenever they want to change how Facebook works, all the content is formatted on their servers and sent down each time you want to display it. So when I'm in the middle of nowhere and want to check where my friend said we were going to meet, it's like being back in the dark ages. I can't access information I already accessed once on this device.
I totally agree with Michael Winston Dales. There has been a regression in user experience for some time. Facebook and Google+ are serious offenders in this regard, something I’ve covered earlier (Facebook here and Google+ here). You provide a good user experience by providing a good native app, which looks and behaves like other apps on the platform. User experience is about how something feels to use, not the amount of features it delivers.
Then there is the minor detail of not being able to access content offline, which is just ridiculous.
Jon Mitchell pretty much nails it when explaining what it is that is so great about Path:
Path is not conducive to networking or discovering people. Twitter and Facebook are great for that. Google+ can dump thousands of new people on you without even asking. We don't need another place to network. What we need is a place for intimacy and trust that is still enhanced by the sharing power of the mobile Web.

This is exactly how I feel about it, and why I try to get my closest friends to start using it. It isn't a place to showcase yourself, dump links to funny videos, etc. It is about sharing your life with your closest friends; your thoughts, your memories, the stuff that you feel confident sharing with your inner circle. This becomes quite obvious when opting to share on Facebook as well: what you just wrote isn't all that personal, so it's OK for everyone to see. And this works both ways: you keep your personal stuff in, and you keep the noise out:
You also can’t link to the Web from Path. URLs don’t work. That’s an intentional decision by the Path team, and a bold one. On all the everything-networks, linking to the Web is part of the experience. Google+ may suck at it, Facebook may kidnap your links and keep them inside its walls, and Twitter may butcher your URLs, but, in their weird ways, they let you bring in all the signal and noise of the Web. Path does not.