Thursday, October 19, 2017

San Francisco - Day 5

This morning was a cool 50°F, but I didn't need to be at the conference hotel until 9. I picked up a cream cheese & tomato bagel on the way; it's a great combination. This was the view from the meeting room looking towards the bay, and it may be the best view I've had this whole week. My 7th-story room looks out into an alley at a brick wall. Today (Thursday) is also the last day of the meeting. I head home tomorrow morning.

Here is a picture of the group that is meeting for most of the day. The group is made up of network engineers from research universities and regional networks; in some cases, the same individual works for both.

This picture would make a good Apple advertisement: the room is about 80% Macs, there are two Microsoft tablets, and most of the remaining PCs are running some Linux variant. I believe that out of the thirty-six participants, only two are running Windows. It would also be a great example of why STEM education initiatives are so important. Out of the 36 participants, only 3 are women.

Just in case anyone is interested, here is the preliminary list of topics...


Discussion Topics and Notes
  • Next Generation I2 Infrastructure
    • Equipment
    • End-to-end
    • Flexible Edge
  • How can I2 improve cloud connectivity?
    • What does the community need?
  • SDN for real, not just as a plaything.
    • Intentionally provocative language for the title. Most of what I see in terms of SDN is point solutions (often around the Science DMZ) that seem more oriented towards solving the problem of not having SDN. That’s fine at small scale, and it adds spice to the job, but is it really scalable? What does real SDN in a production campus network look like? Probably not what has been done to date.
  • Automation
    • Ansible and Salt
    • Docker, Docker Swarm, Kubernetes
  • IPv6
    • State of IPv6 implementation on campuses across the US - SLAAC vs. DHCPv6
    • Device registration portal with IPv6 support - does such a thing exist?
    • Challenges in tying IPv4 and IPv6 addresses to a user without 802.1X (see the sketch after this list)
    • Experiences with NAT64 or some variant
  • IPv4 - Buying more address space vs. NAT?
  • Data Center Networking:
    • Virtualization of the networking
    • NSX? Contrail? ACI?
    • EVPN? MPLS? VXLAN?
    • Extending to the Cloud
  • Wireless
    • 5 GHz-only SSID?
    • Device Registration & Fingerprinting
    • Service Assurance
  • On-campus speed test/self network diagnostic tool solution
  • On-campus CDN installs
  • Security topics
    • DNS RPZ feeds
    • Border/Edge Firewalls
    • IDS/IPS in-band/out-of-band
    • NAC & Client Posture Assessment
  • Traffic flow and pcap monitoring tools
  • R&E Connectivity
  • Cooperative grant funding projects and ideas
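
For the item above about tying IPv4 and IPv6 addresses to a user, here is a minimal sketch of the kind of approach that gets discussed: pull the routers' ARP (IPv4) and neighbor-discovery (IPv6) tables and join the two on MAC address, which a registration portal can then map back to a user. The sample data and table format are made up; in practice the rows would come from SNMP or the device CLI.

    from collections import defaultdict

    # Hypothetical data; real rows would be pulled from router ARP and
    # IPv6 neighbor tables via SNMP or the device CLI.
    arp_table = [
        ("10.20.30.41", "00:11:22:33:44:55"),        # (IPv4 address, MAC)
        ("10.20.30.42", "66:77:88:99:aa:bb"),
    ]
    ndp_table = [
        ("2001:db8:1::41", "00:11:22:33:44:55"),     # (IPv6 address, MAC)
        ("fe80::211:22ff:fe33:4455", "00:11:22:33:44:55"),
        ("2001:db8:1::42", "66:77:88:99:aa:bb"),
    ]

    def correlate(arp, ndp):
        """Group IPv4 and IPv6 addresses by MAC so both can be tied to a
        device, and from there to whoever registered it."""
        by_mac = defaultdict(lambda: {"ipv4": set(), "ipv6": set()})
        for ip, mac in arp:
            by_mac[mac.lower()]["ipv4"].add(ip)
        for ip, mac in ndp:
            by_mac[mac.lower()]["ipv6"].add(ip)
        return dict(by_mac)

    for mac, addrs in correlate(arp_table, ndp_table).items():
        print(mac, sorted(addrs["ipv4"]), sorted(addrs["ipv6"]))

The obvious gap, and part of why this stays a discussion topic, is that SLAAC privacy addresses come and go, so the IPv6 side of the table has to be polled frequently to stay useful.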

Wednesday, October 18, 2017

San Francisco - Uh, the Next Day

With the BlogPad Pro iPad app, you have to give a new post a title before you can add any other content; hence today's title. I was here before 7 am for a breakfast meeting. The AV person setting up the projector for the one meeting scheduled in the room made a comment along the lines of “I hope someone puts up da** PowerPoint”. Shortly thereafter, one of the participants connected his laptop showing a picture of Hoover Dam and the Three Gorges Dam. That’s the nature of this group.

One of the real topics discussed was the “why” of SDN (Software Defined Networking). After all, most of the time the data being shipped is IP based. We speculated that the problem being solved was bypassing the campus security device, i.e. the firewall. It would also bypass all of the L3 devices along the path. That was the original justification, since those devices used to be the bottleneck. That's not the case anymore.

Tomorrow, there is an all-day meeting where a group of network engineers gets together to discuss various topics. As you might expect from a group like this, there is no pre-announced agenda; it's sort of done on the fly. Most of the group went out this evening to a restaurant near the hotel. Even though the restaurant promised they would be willing to split the check into smaller chunks, that's not what they did. The end of the table I was on had folks from North Dakota, North Carolina, Texas, Michigan, and Guam. Quite the diverse group. (BTW, that's not who is in the picture)


Tuesday, October 17, 2017

San Francisco - Day 3

Ready for the day: coffee and iPad w/keyboard. Today’s focus seems to be primarily security, with the first couple of talks on DDoS mitigation. Then SDN (Software Defined Networking), a topic I find fascinating but haven't had much of an opportunity to try out. Now it looks like it may be ready for actual limited pseudo-production deployment.

It’s kind of interesting to see all of the new faces here at the meeting. It means that the organization is healthy and growing. I don’t know what the actual attendance is, but I’m seeing a lot of “First Time Attendee” tags on the name badges.

This is a lunch meeting of the Performance Working Group, maybe a couple dozen people. Some of the tools the group had been relying on are no longer supported, so things are changing even though the net result rarely changes. Over the years, we’ve had workshops in Alaska from several of these working groups, but generally there wasn’t a lot of interest within the university community: IPv4 multicast once, IPv6 twice, and Performance once.

One of the topics was the deployment of small devices running perfSONAR (the name the Performance Working Group uses for its toolset). Different campuses have used small x86 PCs and Raspberry Pis. The main advantage of the x86 devices was a 1 Gb/s interface versus 100 Mb/s. I used to run perfSONAR nodes in Fairbanks and Barrow. The Fairbanks installation was limited by the lack of a 1 Gb/s port on the server. That's probably no longer an issue, but I have the server turned off. The Barrow server is also turned off right now since I’m not up there often enough to keep the node updated. It was useful for local network testing and for verifying the throughput on the satellite link. Testing does impact user throughput, so it was not done on a regular basis.
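
For what it's worth, here is a minimal sketch of how an occasional throughput test might be kicked off from one of these small nodes, assuming the perfSONAR tools are installed. The pscheduler invocation is from memory and the destination hostname is made up, so treat both as assumptions.

    import subprocess

    # Hypothetical far-end perfSONAR host; substitute a real test point.
    DEST = "ps-node.example.edu"

    def run_throughput_test(dest):
        """Ask pscheduler to run a single throughput test and return its output."""
        result = subprocess.run(
            ["pscheduler", "task", "throughput", "--dest", dest],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # Since the test competes with user traffic (especially on a satellite
    # link), this would be run sparingly, say from an off-hours cron job,
    # rather than on a continuous schedule.
    print(run_throughput_test(DEST))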

This was followed by an afternoon of talks, a vendor reception, and a connectors meeting. By 7 I was pretty exhausted, so I passed on several dinner invites and headed for the hotel. This 24-hour diner is right on the corner next to the hotel and has decent food. I stopped for a quick snack on the way.

Monday, October 16, 2017

San Francisco - Day 2

At most of the Internet2 meetings, there is usually some sort of performance component, partly to highlight one of the traditional features of a high-performance network: minimal traffic delay. In this case, they were demonstrating a master class from SFJAZZ with the instructors and students in three locations.

In this case, the three locations were fairly close together geographically, though the music was less tolerant of delay than in other demonstrations. The two instructors were on the stage in San Francisco, the student was at Stanford, and his musical accompaniment was down the road in San Francisco at the SFJAZZ theatre. They were using LOLA, a low-latency audio/video encoder with a latency of about 5 ms; to that you add the round-trip time over the network. For modest distances, this is tolerable for the individual performers to hear each other. In the past, music was selected that would tolerate modest delays, such as modern classical.
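
A rough worked example of the latency budget, using my own illustrative numbers rather than anything measured at the demo:

    # Back-of-the-envelope latency budget for a LOLA-style demo. The path
    # length and fiber speed are illustrative assumptions, not measurements.
    LOLA_LATENCY_MS = 5.0      # encoder/decoder latency cited for LOLA
    FIBER_KM_PER_MS = 200.0    # light in fiber, roughly 2/3 of c
    PATH_KM = 60.0             # rough San Francisco <-> Stanford fiber path

    one_way_propagation_ms = PATH_KM / FIBER_KM_PER_MS
    one_way_total_ms = LOLA_LATENCY_MS + one_way_propagation_ms
    round_trip_ms = 2 * one_way_total_ms

    print(f"propagation, one way: {one_way_propagation_ms:.2f} ms")
    print(f"total, one way:       {one_way_total_ms:.2f} ms")
    print(f"round trip:           {round_trip_ms:.2f} ms")

Even doubling the assumed path length keeps the total well under the roughly 20-30 ms range where delay generally starts to bother performers, which is why this works over modest distances but not across the country.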

After the performance, there was a reception, then I walked back to my hotel. On the way, I stopped at the newish Apple Store Union Square. I was a bit underwhelmed: two floors of high tables with their products available to play with, plus quite a few places for potential customers to just use the Wi-Fi, maybe while they wait for appointments.