
Someone asked if they could see a recording of my IPv6 presentations. No, not really, but I’d be happy to do a panel via Google+ Hangout. My format is very amenable to it.

My format

For each of $(topic meander):

  1. Ask some probing/polling questions. What’s the lowest knowledge level in the audience?
  2. If it seems that there might be some foundational knowledge missing, go back to (1), but ask questions probing deeper into the foundational area.
  3. Give a brief explanation of concepts to get everyone familiar with a base level of terminology and concepts.
  4. Explain how IPv6 extends, relates to, or changes what people are already familiar with.
The more times I hit (2), the more frequently I get audience participation and assistance in (3). That’s nice, because audience members usually know each other better than I know them, and can sometimes couch things in better terms.

Topics

I start a presentation with most of the room not even knowing what ARP is, and end in an hour with everyone having at least connecting knowledge of:

  • Ethernet segments
  • ARP, ARP storms, NDP
  • broadcast, multicast
  • IP, UDP, TCP
  • netmasks, CIDR
  • the OSI network model (in particular, layers 1-3), dual/multistacking
  • address scopes
  • proxies/application-layer gateways, IPv4/IPv6 transition mechanisms
  • IPv6 certification and practical experience
  • IPv4 exhaustion
  • NAT in IPv4, NAT in IPv6
  • What a /64 is, and why it’s important.
  • Router Advertisements, and DHCP in IPv4 vs. IPv6.
  • And other stuff I don’t remember off the top of my head; I address topics as they come up in relation or proximity to where conversation’s currently gone.

One or two of those items might get lost, depending on how frequently I have to cover the same ground. But if someone takes notes on terminology and concepts, they should have everything they need to fill in the gaps.

I want a mechanics-aware NPC tracker and database for my Pathfinder campaign.

  • It should allow me to enter my existing set of available NPCs.
  • When I start (or plan) a combat encounter, I want to tell it, “clone this NPC that many times”, and have it give each one a new name. This way, I’ll be able to keep track of how many (and which) NPCs the PCs have killed, so I can tack on things like revenge plots later.
  • I want to be able to search the NPC database for NPCs within a certain range of CR, for goblinoid NPCs, for arcane spellcasters and for melee types.
  • I want to be able to edit the database via a web interface, and perform at-session operations on my Xoom or phone.
  • I want to be able to print off batches of NPCs, getting a good, compact stat block. Online access to my Google Docs NPC library is nice, but switching back and forth between documents in the middle of combat? Not so nice.
  • I want to be able to share sets of NPCs with other users; almost all of my NPCs have been generated for me by my players.
  • I want this thing to work well on Linux, Windows and on my Android devices.

I’d be willing to pay for the thing, even.

The task:

I wanted to copy a DCIM folder from Kaylee, my desktop machine, to Inara, my HTPC. While it’s not the fastest approach, SSH is reasonably quick and unattended, especially if you tune a few parameters in sshd_config, such as prioritizing block ciphers, and in ~/.ssh/config, such as disabling compression when you know the data’s going to be moving across a local gigabit network.
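
For example, the client side of that tuning might look like this in ~/.ssh/config. This is only a sketch: the host alias and cipher list are illustrative, and the available cipher names vary by OpenSSH version.

```
Host kaylee
    # Skip compression; it just burns CPU on a local gigabit link.
    Compression no
    # Prefer cheaper ciphers first; exact names vary by OpenSSH version.
    Ciphers aes128-ctr,aes192-ctr,aes256-ctr
```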

Now, I rather like Midnight Commander. If you grew up on DOS appreciating Norton Commander, you’d probably like it, too. Midnight Commander supports file transfer and browsing over SSH, complete with progress meters and the like. That makes it a generally handy tool for this kind of thing.

The error:

So I ssh’d to my desktop box from a screen session on my HTPC, launched mc, and told it to connect to Kaylee. Then I selected a relatively small folder, DCIM; this particular DCIM directory was only two gigabytes or so. This was after an earlier aborted attempt to copy some files over (I’d had to run a quick errand), so when it prompted me whether or not I wanted to overwrite, I told it All.

I wasn’t happy with my transfer rate, so I aborted, deleted my target directory, and went to try again.

Did you see what I did wrong there?

I connected to Kaylee, launched mc, told mc to connect to Kaylee, and then told it to copy a folder to itself over the SSH link. So I was, file by file, overwriting every file in that DCIM folder with itself. Then I aborted halfway through a transfer, and compounded the error by deleting the directory. So I deleted 2GB of photos.

The recovery:

A quick search on DuckDuckGo for ext4 undelete turned up extundelete, which turned out to be exactly the tool I needed. One minor rub: the tool requires the filesystem to be unmounted, or at least mounted read-only.

So, after killing all the processes using files on the filesystem my home directories happen to sit on, I ran

mount -t ext4 -o remount,ro /dev/device-node /path/to/mount/point

to remount the filesystem read-only. Then I ran the tool. It worked flawlessly, and I got my photos back.
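
For reference, the invocation ran along these lines. This is a sketch: the device node and path are placeholders, and extundelete wants the filesystem unmounted or read-only.

```
# Run from a scratch directory on a *different* filesystem.
extundelete /dev/device-node --restore-directory relative/path/to/DCIM
# Recovered files are written under ./RECOVERED_FILES/
```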

So I have a scenario where this guy’s question might apply. He’s looking to pack resources in with a .lib file, but that simply won’t work, as the file format doesn’t allow for it.

Thing is, my .lib is compiled in the same build sequence as the application it’s supposed to be pulled into, so perhaps I have a little more leeway. A couple of things I’ve discovered:

  • It doesn’t matter if you completely hose the syntax inside one of a .rc file’s TEXTINCLUDE sections. Failures in there will not propagate out and indicate a build failure; your app will compile fine, but you won’t have any resources.
  • Similarly, if you mess up the path to a .rc file you’d like to include, again, build errors won’t propagate out; you’ll just be missing the resources from that file.

Both of these mean that Roger Lipscombe’s third solution, and jeff_t’s elucidation of that approach, are very error-prone. Even if you get it to work, it’ll likely take you a while to get it right, and you won’t know until you inspect the contents of your resulting .DLL or .EXE module to see whether the resources you’re looking for made it in. Maybe you can do that in a post-build step, but now we’re layering on more complexity, so we look somewhere else.

To take a different tack, you could add the path to your .res file to your main project’s linker options’ additional include directories, and then add your .res file to the inputs for that linker instance. In the past, I’ve found that approach to be error-prone; if you have a bunch of project configurations, you need to make sure it’s applied across all of them. That’s an additional step, which is an additional opportunity for error, and that particular error has bitten me many times in the past. So we look somewhere else.

Another approach might be to try using #pragma comment(lib, "library.res") and #pragma comment(linker, "/INCLUDE:path/to/library/"), surrounded by an #ifdef guard, in one of the headers that gets pulled into your main project. Except that while this approach works great with your .lib file, and while you can put .lib and .res files next to each other under inputs in your project’s linker settings, it won’t work with your .res file. You get another silent error.

There’s really no good solution to pulling in .res files; despite some superficial similarities in their linking semantics, they can’t be pulled in by the same features and mechanisms that handle .lib files. And everything that doesn’t work fails silently, until your program fails to find a resource at run time.

No. Maybe you’ll be lucky, and you’ll hit on the right combination of keywords to find a useful result. Or maybe you’ll be given a different personalized view of the search engine’s knowledge space. Or maybe you’ll give me a link I’ve already seen, but which is irrelevant. Or maybe you’ll find nothing at all, like I did.

In any given two-week period, I expect to have each of those experiences at least once. So don’t be so quickly condescending with your LMGTFYs. I asked in your presence because I hoped you might be an expert.

Right now, the state of things is pretty disheartening. I’ll hit docs first. They’ll probably be incomplete or inscrutably organized. Then I’ll hit search engines. Then I’ll ask.

Then, very probably, I’ll be digging into the source code for your language interpreter or other tool. I dug into the Linux kernel source looking for an answer to a Xen question, yesterday. I’ve had to dig into PHP extensions’ source code around twice a month. When trying to figure out how a Python script works, I’ve had to dig through the code of the modules it calls through.

Right now, forums suck, search engines suck (oddly enough, because forums tend to suck), documentation sucks, and the communities around far too many technologies are far too quick to assume a lack of any prior effort. The classic demonstration of at least three of these is to hit a search engine with $query, and have the most promising result be a forum result where the only respondent says something along the lines of “just search for $query”.

I’m not saying there aren’t parasites in forums and chat rooms, but there tends to be a presumption that the people asking questions are parasites.

Anyway, that’s what people are doing wrong. As for how to do it right? The best answers I’ve seen contain three parts in sequence:

  1. A reference to a specific resource for a better understanding of the specific subject area of the question.
  2. A direct response to the question, regardless of whether or not the answerer thought the asker asked the right question.
  3. A remark that the question represents an unusual problem, that the asker is probably doing something wrong, along with an attempt to learn more about the asker’s scenario and why they aren’t using a more typical solution.

Typically, I don’t see all three of those parts come from the same person. Usually, parts 1 and 2 come from one place, while part 3 comes from someone else. I’ve often seen askers who got all three of those parts in that order stick around; they got good information, and they’ve found a good resource, a good community. I’ve usually seen those that stick around go on to answer questions posed by others.

Though when someone receives part 3 before parts 1 or 2, they get flustered, and may not stick around long enough to get an answer useful to them. So don’t do it in that order.🙂

So, I gave a presentation in front of the MDLUG this past Saturday. Mostly ad-libbed, but I did put together a list of IPv6 transition mechanisms, assembled from info I found on Wikipedia:

IP-IP tunnels: 4in6, 6in4.

6over4 – similar to SLAAC’s address configuration, but using a host’s IPv4 address rather than its MAC address. The IPv4 address is appended to fe80:0000:0000:0000:0000:0000: Also, ff02::1 becomes 239.192.0.1, and ff02::2 becomes 239.192.0.2. Since link-local addresses and link-local multicast then work, the other normal means of configuring hosts on IPv6 may then be used.
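
That 6over4 mapping is easy to check with Python’s ipaddress module. A sketch; 192.0.2.1 is just a documentation address:

```python
import ipaddress

def sixover4_linklocal(ipv4: str) -> str:
    """6over4 link-local address: the host's 32-bit IPv4 address
    becomes the low bits of the interface ID, under fe80::/96."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address((0xFE80 << 112) | v4))

print(sixover4_linklocal("192.0.2.1"))  # fe80::c000:201
```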

DS-Lite – Clients only get IPv6 addresses, and a transition mechanism such as NAT or proxies is used to grant clients access to the global IPv4 network.

6rd – An ISP maps all (or some) of the IPv4 address space to a range within its own IPv6 address space, and operates a relay node which behaves similarly to a 6to4 relay node. 6rd does not enable IPv4 nodes to access IPv6 nodes, or vice versa; it allows IPv6 nodes to reach each other using a stateless tunnel over IPv4.

6to4 – A router with an IPv4 address appends that address to the 2002::/16 prefix, and then appends an arbitrary 16-bit value. 6to4 does not enable IPv4 nodes to access IPv6 nodes, or vice versa; it allows IPv6 nodes to reach each other using a stateless tunnel over IPv4.
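
Likewise for 6to4: the router’s 32-bit IPv4 address slots in directly after the 16-bit 2002 prefix, which yields a /48 for the site. Another sketch with a documentation address:

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> str:
    """Derive a router's 6to4 /48 prefix from its public IPv4 address."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # 2002 occupies bits 127-112; the IPv4 address occupies bits 111-80.
    return str(ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48)))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```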

ISATAP – Operates similarly to 6over4, with three important distinctions: IPv4 multicast is not required, IPv6 multicast is not available, and hosts must be configured with a list of potential routers. (This is often done by querying DNS for isatap.example.com.) Furthermore, the IPv4 address is appended to fe80:0000:0000:0000:0000:5efe:

NAT64 – A host/router with both IPv4 and IPv6 connectivity operates as a translator, mapping IPv4 addresses to IPv6 addresses, and translating and routing packets between the two networks.

DNS64 – A DNS resolver which synthesizes AAAA records for IPv4-only hosts, where the AAAA records induce the client to connect through a proxy of some sort—typically a NAT64 router.

Teredo – IPv6 over UDP. A Teredo server provides the Teredo client with its IPv6 configuration details.

Later, after the meeting, I put together a bullet list of topics and points touched on during the meeting. These are not in the order they were discussed, but rather in the order I remembered them later.

  • IPv6 addresses have 128 bits, as opposed to IPv4’s 32 bits.
  • It is recommended that ISPs give their customers /48s or /56s. A network operator can subdivide their address range as much as they like for their own needs and purposes.
  • The “All-nodes” IPv6 multicast address is ff02::1
  • The “All routers” IPv6 multicast address is ff02::2
  • Link-local addresses begin with fe80
  • “::” is shorthand for 0s
  • CIDR stands for “Classless Inter-Domain Routing”
  • ULA stands for Unique Local Address, and consists of the address range fc00::/7
  • The 6to4 address range consists of 2002::/16
  • SixXS and Hurricane Electric both provide forums for IPv6 professionals and enthusiasts.
  • SixXS and Hurricane Electric both provide free IPv6-over-IPv4 tunnels, allowing IPv4-only networks to access the global IPv6 network
  • Hurricane Electric also offers a free, practical IPv6 certification program, and free DNS hosting for up to fifty domains for those in the cert program.
  • ‘radvd’ is the router advertisement daemon, used for SLAAC configuration of IPv6 hosts
  • DHCPv6 is used for DHCP configuration of IPv6 hosts.
  • The *recommended* configuration for a network is Dual Stack: both IPv4 and IPv6 connectivity.
  • It will grow more expensive to have a public IPv4 address, and double-NATting of IPv4 addresses will grow more common as the number of hosts on the IPv4 network increases.
  • Transition mechanisms exist to allow IPv6 networks to reach each other using tunnels over IPv4 (and vice versa)
  • Transition mechanisms exist to allow hosts on IPv4 networks and hosts on IPv6 networks to communicate with each other.
  • IPv6 makes use of ethernet-level multicast with Neighbor Discovery, as opposed to IPv4’s use of ethernet-level *broadcast* with ARP.
  • Some hardware is already IPv6-capable. Some hardware can be made IPv6-capable via a firmware update. Some hardware cannot be made IPv6-capable.
  • Windows 9x had IPv6 support by way of Trumpet Winsock. Windows 2000 had very, very beta IPv6 support. Windows XP has IPv6 support which is disabled by default, and using it on XP is not recommended. Windows Vista and Windows 7 currently have the best IPv6 support of any desktop or workstation OS, owing to their support of DHCPv6. Linux has good support for SLAAC configuration, but not for DHCPv6.
  • If you must, you can enable IPv6 on Windows XP using the command “netsh int ipv6 install”
  • DNS is a distributed database. DNS servers can contain information about both IPv4 and IPv6 addresses.
  • DNS stores IPv4 address information in A records, and IPv6 address information in AAAA records.
  • DNS stores reverse lookups in PTR records. The record for IPv4 address 127.0.0.1 looks like “1.0.0.127.in-addr.arpa.”, and the record for IPv6 address 2605:2700:0:3::4713:91bf looks like “f.b.1.9.3.1.7.4.0.0.0.0.0.0.0.0.3.0.0.0.0.0.0.0.0.0.7.2.5.0.6.2.ip6.arpa.”
  • On Linux, the ‘ping’ command is for IPv4, and the ‘ping6’ command is for IPv6. On Windows, the ‘ping’ command works with both IPv4 and IPv6.
  • On Linux, you can use either the ‘ifconfig’ or the ‘ip’ command to view and set information relating to IPv6. On Windows, the ‘netsh’ command serves a similar purpose.
  • On Linux, the ‘iptables’ command corresponds to IPv4, and the ‘ip6tables’ command corresponds to IPv6.
  • On Linux, you can place rules in the PREROUTING, FORWARD and POSTROUTING chains to apply firewall rules to your routing.
  • If you must, you can use ULA addresses as part of an IPv6-IPv6 NAT, though some IPv6 network administrators will want to strangle you.
  • The #ipv6 channel on Freenode includes many IPv6 professionals and enthusiasts, and the people there are interested in “teaching you to fish,” not “giving you fish.” Yes, it is an active channel. Yes, the content is usually on-topic. Yes, there is occasionally offtopic content.
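
As an aside, the PTR construction in the list above can be generated mechanically; Python’s ipaddress module (3.5+) builds the reverse name for you:

```python
import ipaddress

# reverse_pointer yields the in-addr.arpa / ip6.arpa name for any address.
print(ipaddress.ip_address("127.0.0.1").reverse_pointer)
print(ipaddress.ip_address("2605:2700:0:3::4713:91bf").reverse_pointer)
```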

And something I should have mentioned, but forgot to:

  • IPv6 links should not be smaller than a /64; without at least a /64, SLAAC (global-scope address autoconfiguration) can’t work.
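
That /64 requirement follows from SLAAC’s modified EUI-64 scheme, which builds a 64-bit interface identifier from the host’s MAC address. A sketch (the MAC address here is made up):

```python
def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64: insert ff:fe into the middle of the MAC
    and flip the universal/local bit of the first octet."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    full = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = [f"{full[i] << 8 | full[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

print(eui64_interface_id("00:11:22:33:44:55"))  # 211:22ff:fe33:4455
```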

I’m going to try to present at Penguicon. If anyone wants some in-person IPv6 education, let me know.🙂

< mrjester> In Bind-ese, is this valid?  zone “10.in-addr.arpa” {
< jima> that seems valid enough
< _ruben> it’s not .. unmatched curly bracket
* jima gives _ruben a wedgie
< jima> also, “brace”
* mikemol braces himself. {mikemol}
< jima>😀
< jima> thank you, i half expected someone to go there.
< _ruben> i had my braces removed ages ago
< mrjester> It accepted it..
< mrjester> Now, does it work.
< mikemol> jima: Now brace yourself.
< jima> {jima}
< _ruben> No, it’s collecting welfare instead
< mrjester> sweet.
< mikemol> Fail. It’s {yourself}.
* jima larts mikemol
< mikemol> What, don’t like being punnished?
< jima> you must not know me well.
< mikemol> Well, I know you better than you know {yourself}…
< mrjester> C-C-C-COMBO BREAKER
< mikemol> Game aborted. Break received.
< mikemol> Restarting…
< mikemol> You are in a maze of twisty little passages, all alike.
< mrjester> north
< mikemol> You are in a maze of twisty little passages, all alike.
< mrjester> look up
< mikemol> You see the ceiling.
< mrjester> crush the braces
< mikemol> You can’t do that.
< mikemol> /help for help
< mrjester> lol
< mikemol> /quit to exit.
< mrjester>   /quit
< mrjester>  /bug Game doesn’t support IPv6
< Xenith> /die

A couple of my photos are apparently going to be used in a children’s book in Japan; I’ve been contacted by someone asking for permission to use them for that purpose.

The funny thing is, almost all of my photos on Flickr are CC-By; I merely ask for attribution. I don’t even need to be notified. I set it up this way because I didn’t want to keep taking the time to deal with requests to use my photos…but it doesn’t seem to slow that down any. The CC-By photos get a great deal of inclusion on blogs and such, but I think that’s mostly by people who wouldn’t have bothered with them at all, otherwise.

Here’s the other funny thing…Despite the photos being CC-By, there’s usually some kind of compensation available. Nearly every request I’ve had included either an offer of some explicit sum if my photo were chosen out of their working set, or a tentative query wanting to know how much I might require.

Strange as it is, CC-By isn’t serving my purpose of making the photos less of a hassle, and it doesn’t appear to be changing the economics around the use of my photos, either.

Cosplay - AWA14 - Naruto Uzumaki and Kakashi Hatake

Cosplay - AWA14 - Dragonball Z

So you’re running Gentoo, you rebooted, and now your keyboard and mouse (which had worked just fine before) don’t work. Did you do a software update recently? If your copy of Xorg underwent a version bump, you’ll need to recompile your Xorg drivers to update how they talk to your X server.

First step: Get to a terminal, since X11 won’t work right for you until you fix it. If you’re running an SSH server on the machine, that’s easy; just SSH in. If you’re not, then perhaps you have a serial console enabled. (Admittedly unlikely…) You might try SysRq.[1] If it comes down to it, you can simply reboot the computer (either tap the power button and let acpid reboot things for you, or give it a hard power-off and face the consequences…) and edit your grub boot menu to add “gentoo=nox” to your kernel command line.[2] That’ll at least prevent X from starting on a given bootup.

Second step: Re-emerge all of your x11 drivers. The simplest command I’ve seen for this (to date)[3] is:

emerge -1a $(qlist -I -C x11-drivers/)

There are two commands here. The first that gets run is qlist, and the second is emerge.

emerge, of course, is your usual installation tool. The “-1” parameter tells it to “just build; don’t add this to the world file.” That’s useful if package A is installed as a dependency of package B, and you want to allow package A to be automatically removed after package B is removed. The “-a” parameter tells emerge to ask for confirmation before doing anything.

qlist queries Portage’s database and returns lists. It’s found in app-portage/portage-utils, which you probably already have installed. The “-I” parameter asks it to list installed packages, and the “-C” parameter asks it not to use color codes. (That’s important, because color codes in the output would confuse emerge.) Finally, the “x11-drivers/” parameter asks it to limit its listing to those packages found under…x11-drivers/. So, it’ll only list installed X11 drivers. Handy.

The $() surrounding the qlist command takes the output of the command it runs and splices it into the argument list of the command being built; in this case, the emerge command.

Once you’ve run the command, you should be able to restart slim, gdm, kdm, nodm, or whatever you were using, or even simply restart your computer. X11 should work properly again after that.

[1] “Hold down Alt, hold down SysRq, and press each of the keys in turn. The usual full sequence is R-E-I-S-U-B: Reboot Even If System Utterly Broken.” Thanks to Dale. Or Neil. Not sure.

[2] Thanks to CJoeB for noting this one.

[3] Thanks to Michael Hampicke for posting this one on the gentoo-user mailing list.

So, Sunday, I got Rosetta Code moved from its old VPS to its new VPS. Three substantive changes:

  • The server is now accessible via IPv6. This was mostly brought about by configuration changes.
  • It’s now running on Debian Lenny, instead of Ubuntu 10.04.
  • The vast majority of the PHP code load has been updated to reflect newer versions of software.

I’m most pleased at the Squid cache’s performance. Out of 139,566 requests on Monday, 95,726 were TCP_MEM_HITs, 15,529 were TCP_MISSes, 10,036 were TCP_IMS_HITs, 7,419 were TCP_HITs, 6,889 were TCP_REFRESH_UNMODIFIED, 2,873 were TCP_CLIENT_REFRESH_MISS and 1,093 were TCP_REFRESH_MODIFIED. All in all, that means roughly 81% of client page requests never got past Squid to Apache+mod_php, and 68% of all requests were satisfied by data Squid still had cached in RAM.
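
Those percentages are easy to sanity-check, assuming the requests that never reached Apache were the TCP_MEM_HITs, TCP_IMS_HITs and TCP_HITs:

```python
# Recompute the hit ratios from Monday's Squid log counts.
total = 139_566
mem_hit, ims_hit, tcp_hit = 95_726, 10_036, 7_419

# Requests Squid answered without consulting Apache+mod_php.
served_without_apache = mem_hit + ims_hit + tcp_hit
print(int(100 * served_without_apache / total))  # 81
print(int(100 * mem_hit / total))                # 68 (RAM hits, of all requests)
```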

In short, I’m vastly underutilizing this server. I need to start looking at network throughput data and figure out whether I can migrate to a smaller, cheaper VPS. Prgmr.com is planning to double its RAM, CPU and disk offerings at no cost increase, but I don’t know if they’re doubling network quotas as well.