ScottF
Member
Posts:
206
Joined:
Wed Nov 14, 2012 9:41 am
Certs:
CCNA

Nexus Setup

Tue Jun 10, 2014 2:49 pm

Hi all,

Hoping someone can clear something up. We are looking at buying a few Nexus 5Ks for a project and would like to provide resilience. Looking into how we can do this, I have read about setting up vPCs. Would the design in the image below work? And if it would, do you just set the ports up on the VSS pair like a normal EtherChannel? Also, I know you don't have much to go on, but are there any obvious cons to doing this, or better ways of setting it up?

[attachment: image001.png]

Thanks

P.S. Apologies if this has come up before. When I use the search option all the formatting is broken, so I couldn't find anything.

Reggle
Post Whore
Posts:
1956
Joined:
Sun May 15, 2011 4:16 pm
Certs:
CCNA Security, CCNP, CCDP

Re: Nexus Setup

Tue Jun 10, 2014 3:07 pm

It would work on recent code releases, provided the server supports LACP.
And yes, on the VSS side it would be a normal port-channel.
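On the VSS side it's just a standard LACP port-channel down to the two 5Ks, something like this (chassis/slot/port and channel numbers are only an example):

  ! 6500 VSS (IOS); one member link to each 5K
  interface TenGigabitEthernet1/4/1
   description To N5K-1
   switchport
   switchport trunk encapsulation dot1q
   switchport mode trunk
   channel-group 10 mode active
  !
  interface TenGigabitEthernet2/4/1
   description To N5K-2
   switchport
   switchport trunk encapsulation dot1q
   switchport mode trunk
   channel-group 10 mode active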
http://reggle.wordpress.com

ScottF
Member
Posts:
206
Joined:
Wed Nov 14, 2012 9:41 am
Certs:
CCNA

Re: Nexus Setup

Tue Jun 10, 2014 3:10 pm

Thanks for the quick response, just what I wanted to hear :)

footy
Member
Posts:
157
Joined:
Wed May 23, 2012 8:57 am

Re: Nexus Setup

Tue Jun 10, 2014 7:48 pm

You also need the vPC keepalive to go through a switch that is not part of the vPC.
Also, the server wouldn't need to support LACP; 'mode on' is still valid.
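For the server-facing side, something like this on each 5K (port and vPC numbers made up):

  ! Server-facing vPC member port; numbers are placeholders
  interface Ethernet1/10
   description Server NIC
   channel-group 20 mode active
  !
  interface port-channel20
   vpc 20

Swap 'mode active' for 'mode on' if the NIC can't do LACP; just keep it consistent on both 5Ks and on the server.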

sniper7
New Member
Posts:
13
Joined:
Mon Oct 08, 2012 6:43 pm

Re: Nexus Setup

Tue Jun 10, 2014 9:37 pm

I have a question about this setup. So you would connect servers to the 5Ks, which are 10Gbps ports. What would the connection from the 5K back to the 6509 be, just 10Gbps too? If you have that Nexus 5K fully loaded, won't you see some performance issues? I guess this comes down to oversubscribed ports, and the fact that not all ports will move traffic simultaneously. If they did, I guess it would be the network engineer's responsibility to account for that and provide the necessary bandwidth. I've just always wondered how you can push all that traffic through an uplink port that's no faster than the port to each server. Curious about this.

Reggle
Post Whore
Posts:
1956
Joined:
Sun May 15, 2011 4:16 pm
Certs:
CCNA Security, CCNP, CCDP

Re: Nexus Setup

Wed Jun 11, 2014 2:08 am

footy wrote: Also, the server wouldn't need to support LACP; 'mode on' is still valid.

True, but I don't recommend it.
http://reggle.wordpress.com

ScottF
Member
Posts:
206
Joined:
Wed Nov 14, 2012 9:41 am
Certs:
CCNA

Re: Nexus Setup

Wed Jun 11, 2014 3:28 am

sniper7 wrote: I have a question about this setup. So you would connect servers to the 5Ks, which are 10Gbps ports. What would the connection from the 5K back to the 6509 be, just 10Gbps too? If you have that Nexus 5K fully loaded, won't you see some performance issues? I guess this comes down to oversubscribed ports, and the fact that not all ports will move traffic simultaneously. If they did, I guess it would be the network engineer's responsibility to account for that and provide the necessary bandwidth. I've just always wondered how you can push all that traffic through an uplink port that's no faster than the port to each server. Curious about this.


Connections back to the 6509s will probably be 40Gbps: 2x10Gb per switch in a vPC.

It's the server guys' responsibility to give an idea of what kind of performance they need, and the network engineer's responsibility to provide a device that suits and then monitor it to make sure it's all good.
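Roughly this on each 5K for the uplink, I'd imagine (port and channel numbers aren't decided yet, so treat them as placeholders):

  ! 2x10G per 5K toward the VSS, all in one vPC
  interface Ethernet1/31
   description Uplink to 6509 VSS
   switchport mode trunk
   channel-group 100 mode active
  interface Ethernet1/32
   description Uplink to 6509 VSS
   switchport mode trunk
   channel-group 100 mode active
  !
  interface port-channel100
   switchport mode trunk
   vpc 100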

that1guy15
Post Whore
Posts:
3224
Joined:
Thu Apr 29, 2010 6:12 pm
Certs:
CCNP, CCDP, CCIP

Re: Nexus Setup

Wed Jun 11, 2014 8:38 am

ScottF wrote:
sniper7 wrote: I have a question about this setup. So you would connect servers to the 5Ks, which are 10Gbps ports. What would the connection from the 5K back to the 6509 be, just 10Gbps too? If you have that Nexus 5K fully loaded, won't you see some performance issues? I guess this comes down to oversubscribed ports, and the fact that not all ports will move traffic simultaneously. If they did, I guess it would be the network engineer's responsibility to account for that and provide the necessary bandwidth. I've just always wondered how you can push all that traffic through an uplink port that's no faster than the port to each server. Curious about this.


Connections back to the 6509s will probably be 40Gbps: 2x10Gb per switch in a vPC.

It's the server guys' responsibility to give an idea of what kind of performance they need, and the network engineer's responsibility to provide a device that suits and then monitor it to make sure it's all good.


sniper7 brings up good points. Just watch out, because your 6500 VSS is going to be the bottleneck here, especially if you are running some of the older 10G line cards.

Also, make sure you keep your L3 boundary at the VSS pair and don't route over the vPC-VSS connection.
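In other words, the SVIs stay on the VSS and the link down to the 5Ks is a plain L2 trunk, along these lines (VLAN and addressing are just for illustration):

  ! On the VSS (IOS): the gateway lives here
  interface Vlan100
   ip address 10.1.100.1 255.255.255.0
  !
  ! Downlink to the 5Ks stays pure L2
  interface Port-channel10
   switchport
   switchport trunk encapsulation dot1q
   switchport mode trunk
   switchport trunk allowed vlan 100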
http://blog.movingonesandzeros.net/

Otanx
Post Whore
Posts:
1261
Joined:
Wed Sep 01, 2010 3:37 pm
Certs:
CCNP, CEH

Re: Nexus Setup

Wed Jun 11, 2014 9:14 am

We have a similar setup running right now, so yes, it will work. As for bandwidth constraints, it all depends on your traffic profiles. For us the 5Ks are supporting a Hadoop cluster. It has a lot of east-west traffic which never leaves the 5Ks. During testing we saw almost 30G sustained between all the nodes. The north-south traffic is minimal. For our connection we actually only have 4G (4x1G) between the 5Ks and the 6509s. Eventually this will move to 10G, but the 4G is working very well right now.

-Otanx
Stay networked, my friends.

ScottF
Member
Posts:
206
Joined:
Wed Nov 14, 2012 9:41 am
Certs:
CCNA

Re: Nexus Setup

Wed Jun 11, 2014 9:58 am

that1guy15 wrote:
ScottF wrote:
sniper7 wrote: I have a question about this setup. So you would connect servers to the 5Ks, which are 10Gbps ports. What would the connection from the 5K back to the 6509 be, just 10Gbps too? If you have that Nexus 5K fully loaded, won't you see some performance issues? I guess this comes down to oversubscribed ports, and the fact that not all ports will move traffic simultaneously. If they did, I guess it would be the network engineer's responsibility to account for that and provide the necessary bandwidth. I've just always wondered how you can push all that traffic through an uplink port that's no faster than the port to each server. Curious about this.


Connections back to the 6509s will probably be 40Gbps: 2x10Gb per switch in a vPC.

It's the server guys' responsibility to give an idea of what kind of performance they need, and the network engineer's responsibility to provide a device that suits and then monitor it to make sure it's all good.


sniper7 brings up good points. Just watch out, because your 6500 VSS is going to be the bottleneck here, especially if you are running some of the older 10G line cards.

Also, make sure you keep your L3 boundary at the VSS pair and don't route over the vPC-VSS connection.


I think the plan is to swap out the VSS for new kit in the near future, so that should be OK. A large majority of our traffic will be east-west as well, and the links will certainly be staying L2.

Thanks for all your input, people.

that1guy15
Post Whore
Posts:
3224
Joined:
Thu Apr 29, 2010 6:12 pm
Certs:
CCNP, CCDP, CCIP

Re: Nexus Setup

Wed Jun 11, 2014 10:10 am

Then why not consider collapsing the VSS and 5Ks into a pair of 6Ks or newer 5Ks?
Both lines have L3 integrated, unlike the older 5500s, and should be able to handle your load at line rate. Plus, if you're heavy on east-west, this would simplify your design and lower the risk of oversubscription.

Just a thought. Check out the numbers to make sure they fit with what I'm saying.

IMO VSS has no place in the DC anymore.
http://blog.movingonesandzeros.net/

williamtyrell78
Post Whore
Posts:
1388
Joined:
Tue Mar 12, 2013 3:58 pm
Certs:
CompTIA Net+, CCENT, CCNA R&S

Re: Nexus Setup

Wed Jun 11, 2014 11:27 am

that1guy15 wrote: Then why not consider collapsing the VSS and 5Ks into a pair of 6Ks or newer 5Ks?
Both lines have L3 integrated, unlike the older 5500s, and should be able to handle your load at line rate. Plus, if you're heavy on east-west, this would simplify your design and lower the risk of oversubscription.

Just a thought. Check out the numbers to make sure they fit with what I'm saying.

IMO VSS has no place in the DC anymore.


I second this. Collapse it, get newer equipment, and simplify your architecture. Well said. +1+1
"I can't have a network loop, I have Spanning-Tree" ....famous last words

ScottF
Member
Posts:
206
Joined:
Wed Nov 14, 2012 9:41 am
Certs:
CCNA

Re: Nexus Setup

Thu Jun 12, 2014 3:42 am

that1guy15 wrote: Then why not consider collapsing the VSS and 5Ks into a pair of 6Ks or newer 5Ks?
Both lines have L3 integrated, unlike the older 5500s, and should be able to handle your load at line rate. Plus, if you're heavy on east-west, this would simplify your design and lower the risk of oversubscription.

Just a thought. Check out the numbers to make sure they fit with what I'm saying.

IMO VSS has no place in the DC anymore.



The design decisions are made above my head; however, the VSS does more than just serve the datacentre, so consolidating it may not be an option.

ScottF
Member
Posts:
206
Joined:
Wed Nov 14, 2012 9:41 am
Certs:
CCNA

Re: Nexus Setup

Mon Jul 14, 2014 11:16 am

footy wrote: You also need the vPC keepalive to go through a switch that is not part of the vPC.
Also, the server wouldn't need to support LACP; 'mode on' is still valid.



Apologies for bringing up an old post, but our Nexus switches are due next week and I was re-reading this thread. When I first read this statement I thought it said the keepalive needed to go through a switchport not part of the vPC, but it actually reads 'switch'.

Can you just connect the two 5Ks together (like I have for the peer link) and send the keepalive over that link, or does it actually have to go through another switch?

Thanks

Edit - So reading into the keepalive, it seems to use the management address of the other switch in the vPC domain. So as long as the two switches can ping each other's management address, this will be OK?

that1guy15
Post Whore
Posts:
3224
Joined:
Thu Apr 29, 2010 6:12 pm
Certs:
CCNP, CCDP, CCIP

Re: Nexus Setup

Mon Jul 14, 2014 12:06 pm

Yeah, it is possible, but not advisable. Just use the mgmt interfaces to connect the 5Ks directly and you will save yourself a lot of pain in failure scenarios. My advice for stability in failure situations:

1) Run the peer-link over at least two interfaces in a port-channel. If the peer-link fails, you are going to have a bad morning.
2) Run your peer-keepalive between the mgmt interfaces directly. If the keepalive fails, you will have a very bad day/week.
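For example (the domain ID, ports, and addresses below are all placeholders):

  ! Placeholders throughout
  feature vpc
  feature lacp
  !
  vpc domain 1
   peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
  !
  ! Two-member peer-link (point 1)
  interface Ethernet1/1
   description vPC peer-link member
   switchport mode trunk
   channel-group 1 mode active
  interface Ethernet1/2
   description vPC peer-link member
   switchport mode trunk
   channel-group 1 mode active
  !
  interface port-channel1
   switchport mode trunk
   vpc peer-link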
http://blog.movingonesandzeros.net/

mlan
Ultimate Member
Posts:
819
Joined:
Thu Nov 17, 2011 6:09 pm

Re: Nexus Setup

Mon Jul 14, 2014 12:22 pm

that1guy15 wrote: Yeah, it is possible, but not advisable. Just use the mgmt interfaces to connect the 5Ks directly and you will save yourself a lot of pain in failure scenarios. My advice for stability in failure situations:

1) Run the peer-link over at least two interfaces in a port-channel. If the peer-link fails, you are going to have a bad morning.
2) Run your peer-keepalive between the mgmt interfaces directly. If the keepalive fails, you will have a very bad day/week.


That is good advice. You don't want to burn two of your 10GbE ports just for the peer-keepalive, so combine that functionality with your mgmt interfaces. This is typically why you want your mgmt interfaces on a switch: you also get IP connectivity for management functions, which are provided in a separate VRF instance.
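Since the keepalive rides the management VRF, you can sanity-check that path from either 5K before turning anything up (the hostname and address below are just examples):

  N5K-1# ping 192.168.0.2 vrf management
  N5K-1# show vpc peer-keepalive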

that1guy15
Post Whore
Posts:
3224
Joined:
Thu Apr 29, 2010 6:12 pm
Certs:
CCNP, CCDP, CCIP

Re: Nexus Setup

Mon Jul 14, 2014 12:31 pm

I actually go a step further: keep the keepalive link 100% OOB by running a patch cable directly between the two.

Usually a DC's OOB or mgmt network is run on lower-quality gear, or isn't considered mission-critical. But your peer keepalive is. One Cat6 cable between the two mgmt interfaces removes all other points of failure from the equation.
http://blog.movingonesandzeros.net/

Reggle
Post Whore
Posts:
1956
Joined:
Sun May 15, 2011 4:16 pm
Certs:
CCNA Security, CCNP, CCDP

Re: Nexus Setup

Mon Jul 14, 2014 12:52 pm

that1guy15 wrote: I actually go a step further: keep the keepalive link 100% OOB by running a patch cable directly between the two.

Usually a DC's OOB or mgmt network is run on lower-quality gear, or isn't considered mission-critical. But your peer keepalive is. One Cat6 cable between the two mgmt interfaces removes all other points of failure from the equation.

Maybe I'm imagining this the wrong way, but... how do you manage that vPC pair, given no layer 3 modules?
http://reggle.wordpress.com

mlan
Ultimate Member
Posts:
819
Joined:
Thu Nov 17, 2011 6:09 pm

Re: Nexus Setup

Mon Jul 14, 2014 1:46 pm

Reggle wrote: Maybe I'm imagining this the wrong way, but... how do you manage that vPC pair, given no layer 3 modules?


http://www.networking-forum.com/viewtopic.php?t=23483

that1guy15, good point on the OOB management infrastructure.

ScottF
Member
Posts:
206
Joined:
Wed Nov 14, 2012 9:41 am
Certs:
CCNA

Re: Nexus Setup

Mon Jul 14, 2014 2:25 pm

Thanks a lot for all the info. You'll have to excuse my lack of knowledge, but I've never used the mgmt ports.

So give the two mgmt0 ports IP addresses and set the keepalive to use them, and use a physical cable to connect the switches' mgmt0 ports.
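So something like this, if I've understood it right (addresses made up, mirrored on the second 5K)?

  ! On N5K-1; swap the addresses on N5K-2
  interface mgmt0
   vrf member management
   ip address 192.168.0.1/24
  !
  vpc domain 1
   peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management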

Is there anything to stop me then setting up a mgmt SVI on our management VLAN so we can connect to them?

Thanks
