that1guy15
Post Whore
Posts: 3224
Joined: Thu Apr 29, 2010 6:12 pm
Certs: CCNP, CCDP, CCIP

Re: Nexus Setup

Mon Jul 14, 2014 4:13 pm

Reggle wrote:
that1guy15 wrote:I'd actually go a step further. Keep the keepalive link 100% OOB by running a patch cable directly between the two.

Usually a DC's OOB or mgmt network runs on lower-quality gear or isn't considered mission-critical. But your peer keepalive is. One Cat6 cable between the two mgmt interfaces removes all other points of failure from the equation.

Maybe I'm imagining this the wrong way, but... how do you manage that vPC pair, given no layer 3 modules?


Good catch, and one I glazed over (aka spaced on). You are right: if the mgmt0 interface is a dedicated PTP link, then you lose access to the mgmt VRF, and thus your only option for managing an L2-only 5K is the console.

My first Nexus setup had L3 cards in the 5Ks, so I just dropped a mgmt VLAN on the default VRF. I jumped into my current setup to see how we are running it, and we do the same on both the 5Ks and 7Ks. The mgmt ports drop into a mgmt switch, as the 7Ks require.

I should also clarify: the peer-keepalive has the important job of keeping the vPC pair from going split-brain, but losing only the keepalive link will not cause an outage. Shit only hits the fan when you lose both the peer-link and the keepalive link together.
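For reference, you can sanity-check both pieces from the CLI. These are standard NX-OS show commands, nothing specific to this setup:
Code:
show vpc
show vpc peer-keepalive
The first reports the peer status and the keepalive status together; the second shows the keepalive source/destination and its current state.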
http://blog.movingonesandzeros.net/

that1guy15
Post Whore
Posts: 3224
Joined: Thu Apr 29, 2010 6:12 pm
Certs: CCNP, CCDP, CCIP

Re: Nexus Setup

Mon Jul 14, 2014 4:19 pm

ScottF wrote:Thanks a lot for all the info. You'll have to excuse my lack of knowledge, but I've never used the mgmt ports.

So, give the two mgmt0 ports IP addresses and set the keepalive to use them, then use a physical cable to connect the switches' mgmt0 ports?

Is there anything to stop me then setting up a mgmt SVI on our management vlan so we can connect to them?

Thanks


The peer keepalive is configured under the vPC domain (this snippet is from the switch whose mgmt0 is 10.10.10.13):
Code:
vpc domain 12
  peer-keepalive destination 10.10.10.14 source 10.10.10.13 vrf management


To configure the mgmt interface (shown here on the other switch, 10.10.10.14; its peer gets .13):
Code:
interface mgmt0
  description VPC_Peer_Keepalive
  ip address 10.10.10.14/24


mgmt0 defaults to the management VRF.
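To round out the pair (same made-up 10.10.10.x addressing as above), the second switch just mirrors the config with source and destination swapped:
Code:
! on the 10.10.10.14 switch
vpc domain 12
  peer-keepalive destination 10.10.10.13 source 10.10.10.14 vrf management
! and on the 10.10.10.13 switch
interface mgmt0
  description VPC_Peer_Keepalive
  ip address 10.10.10.13/24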
http://blog.movingonesandzeros.net/

ScottF
Member
Posts: 206
Joined: Wed Nov 14, 2012 9:41 am
Certs: CCNA

Re: Nexus Setup

Mon Jul 14, 2014 5:18 pm

Can you then access the switches remotely via those addresses? Or, I'm guessing, since they are only in the mgmt VRF, I won't be able to.

If I can't SSH to them, can I still set up an SVI on the management VLAN to SSH to, or would this require an L3 module, as you talked about in your other post?

mlan
Ultimate Member
Posts: 819
Joined: Thu Nov 17, 2011 6:09 pm

Re: Nexus Setup

Mon Jul 14, 2014 6:02 pm

ScottF wrote:Is there anything to stop me then setting up a mgmt SVI on our management vlan so we can connect to them?


Nothing, except that the
Code:
feature interface-vlan
command is required.
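For the OP, a minimal sketch of what that would look like (the VLAN ID and addressing are made up, and per the posts above it's worth lab-testing whether the SVI actually comes up on an L2-only 5K):
Code:
feature interface-vlan
! made-up management VLAN and addressing for illustration
vlan 100
  name Management
interface Vlan100
  description In-band_Management
  ip address 10.20.30.11/24
  no shutdown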

that1guy15
Post Whore
Posts: 3224
Joined: Thu Apr 29, 2010 6:12 pm
Certs: CCNP, CCDP, CCIP

Re: Nexus Setup

Tue Jul 15, 2014 8:24 am

mlan wrote:
ScottF wrote:Is there anything to stop me then setting up a mgmt SVI on our management vlan so we can connect to them?


Nothing, except that the
Code:
feature interface-vlan
command is required.


I was under the impression this would not allow you to have both mgmt0 with an IP and another SVI, even if they are in separate VRFs. Am I off on that?

OP: if you have the 5Ks to test with first, fire them up and see if it works.
http://blog.movingonesandzeros.net/

ScottF
Member
Posts: 206
Joined: Wed Nov 14, 2012 9:41 am
Certs: CCNA

Re: Nexus Setup

Tue Jul 15, 2014 10:23 am

Should be here at the start of next week, so I'll be giving that a go.

Thanks

Vito_Corleone
Moderator
Posts: 9850
Joined: Mon Apr 07, 2008 10:38 am
Certs: CCNP RS, CCNP DC, CCDP, CCIP

Re: Nexus Setup

Wed Jul 16, 2014 3:36 pm

You can have an SVI up for management. It's just like a Catalyst switch. The only reason (that I've ever seen) to use the dedicated management ports for management is DCNM. I'm not sure if it's changed, but a couple years ago you were only able to manage a device in DCNM using the management ports.
http://blog.alwaysthenetwork.com

mlan
Ultimate Member
Posts: 819
Joined: Thu Nov 17, 2011 6:09 pm

Re: Nexus Setup

Thu Jul 17, 2014 11:42 am

Vito_Corleone wrote:You can have an SVI up for management. It's just like a Catalyst switch. The only reason (that I've ever seen) to use the dedicated management ports for management is DCNM. I'm not sure if it's changed, but a couple years ago you were only able to manage a device in DCNM using the management ports.


That is an interesting limitation, and we are using DCNM in my org on the management ports. Vito, how often do you see people using 10GbE ports for the peer keepalive link? It seems like an expensive allocation when budget is tight for port density.

Vito_Corleone
Moderator
Posts: 9850
Joined: Mon Apr 07, 2008 10:38 am
Certs: CCNP RS, CCNP DC, CCDP, CCIP

Re: Nexus Setup

Sat Jul 19, 2014 1:05 pm

mlan wrote:
Vito_Corleone wrote:You can have an SVI up for management. It's just like a Catalyst switch. The only reason (that I've ever seen) to use the dedicated management ports for management is DCNM. I'm not sure if it's changed, but a couple years ago you were only able to manage a device in DCNM using the management ports.


That is an interesting limitation, and we are using DCNM in my org on the management ports. Vito, how often do you see people using 10GbE ports for the peer keepalive link? It seems like an expensive allocation when budget is tight for port density.


Never. You still use the management ports, but you put a switch between them and make them accessible to the network/DCNM. Like you said, burning 10G ports for the keepalive is expensive.
http://blog.alwaysthenetwork.com

AnthonyC
Junior Member
Posts: 63
Joined: Mon Apr 29, 2013 10:35 am

Re: Nexus Setup

Sun Jul 20, 2014 10:35 pm

Vito_Corleone wrote:You can have an SVI up for management. It's just like a Catalyst switch. The only reason (that I've ever seen) to use the dedicated management ports for management is DCNM. I'm not sure if it's changed, but a couple years ago you were only able to manage a device in DCNM using the management ports.


I don't think that's a limitation in DCNM anymore (as of DCNM 6.x, anyway), since you can just specify how DCNM accesses the switch via SNMP, and you can specify which VRF. The only exception is the MDS 9000 SAN switches, if you happen to have them in your environment.

With that being said, it is definitely much better to make use of the management port and to have dedicated OOB access to your core switch.

Vito_Corleone
Moderator
Posts: 9850
Joined: Mon Apr 07, 2008 10:38 am
Certs: CCNP RS, CCNP DC, CCDP, CCIP

Re: Nexus Setup

Mon Jul 21, 2014 2:00 pm

AnthonyC wrote:With that being said, it is definitely much better to make use of the management port and to have dedicated OOB access to your core switch.


Why? If the core is down the OOB network is likely down too, unless OOB doesn't pass through the core at all, and you have completely separate infrastructure.
http://blog.alwaysthenetwork.com

AnthonyC
Junior Member
Posts: 63
Joined: Mon Apr 29, 2013 10:35 am

Re: Nexus Setup

Mon Jul 21, 2014 2:46 pm

Vito_Corleone wrote:
AnthonyC wrote:With that being said, it is definitely much better to make use of the management port and to have dedicated OOB access to your core switch.


Why? If the core is down the OOB network is likely down too, unless OOB doesn't pass through the core at all, and you have completely separate infrastructure.


A truly dedicated OOB should not touch the core infrastructure at all and should be built on separate infrastructure. I think most good colos have separate OOB infrastructure that you can build your own OOB network off of. This will come in handy the day (or night) a network change goes wrong and you lose access to the core.

Also, the old Sup1 on the N7K even had the CMP for OOB.

footy
Member
Posts: 157
Joined: Wed May 23, 2012 8:57 am

Re: Nexus Setup

Mon Jul 21, 2014 2:49 pm

Vito_Corleone wrote:
AnthonyC wrote:With that being said, it is definitely much better to make use of the management port and to have dedicated OOB access to your core switch.


Why? If the core is down the OOB network is likely down too, unless OOB doesn't pass through the core at all, and you have completely separate infrastructure.

Vito, doesn't every network have a dedicated isolated OOB management network?
:dance: :dance: :dance: :dance: :dance:

Vito_Corleone
Moderator
Posts: 9850
Joined: Mon Apr 07, 2008 10:38 am
Certs: CCNP RS, CCNP DC, CCDP, CCIP

Re: Nexus Setup

Mon Jul 21, 2014 4:50 pm

AnthonyC wrote:A truly dedicated OOB should not touch the core infrastructure at all and should be built on separate infrastructure. I think most good colos have separate OOB infrastructure that you can build your own OOB network off of. This will come in handy the day (or night) a network change goes wrong and you lose access to the core.

Also, the old Sup1 on the N7K even had the CMP for OOB.


I've never seen a completely separate OOB network in real life. Sure, it'd be nice to have, but it's expensive and can be difficult logistically (if it doesn't touch the in-band network, how do I access it? Dedicated circuits for OOB, wireless, a different network port, etc.?). I think OOB ports are mostly just a false sense of security, honestly. They're just as easy to break as in-band management.

And that's an interesting point you made - they got rid of the CMP (much better than traditional OOB) because customers rarely used it.
http://blog.alwaysthenetwork.com

Reggle
Post Whore
Posts: 1956
Joined: Sun May 15, 2011 4:16 pm
Certs: CCNA Security, CCNP, CCDP

Re: Nexus Setup

Tue Jul 22, 2014 2:08 am

@Vito: we have a completely separate OOB network here. And it pays off for sure.
- Layer 2 campus switches. Since these OOB ports don't need to live on the main Nexus infrastructure, it's not that expensive; for once, even the LAN Lite image without QoS is an option.
- One or a few VLANs for OOB.
- For normal day-to-day management, a firewall patched into this network, with a physically separate NIC on the main network infra.
- For disaster recovery scenarios, an out-of-band entry into the network. We have some L3 PoE switches with standalone access points that broadcast a DR SSID in the DC. In case of a 'full meltdown' we connect to these directly and still have access.
- Some have considered an extra small internet uplink so we can VPN in as well, but I don't like the potential security issue.
http://reggle.wordpress.com

Vito_Corleone
Moderator
Posts: 9850
Joined: Mon Apr 07, 2008 10:38 am
Certs: CCNP RS, CCNP DC, CCDP, CCIP

Re: Nexus Setup

Tue Jul 22, 2014 6:44 am

Reggle wrote:@Vito: we have a completely separate OOB network here. And it pays off for sure.
- Layer 2 campus switches. Since these OOB ports don't need to live on the main Nexus infrastructure, it's not that expensive; for once, even the LAN Lite image without QoS is an option.
- One or a few VLANs for OOB.
- For normal day-to-day management, a firewall patched into this network, with a physically separate NIC on the main network infra.
- For disaster recovery scenarios, an out-of-band entry into the network. We have some L3 PoE switches with standalone access points that broadcast a DR SSID in the DC. In case of a 'full meltdown' we connect to these directly and still have access.
- Some have considered an extra small internet uplink so we can VPN in as well, but I don't like the potential security issue.


Thanks for posting about your setup.

So, just to be clear, what happens if you're offsite, prod is down, and you need to access the OOB network? It sounds like you're good if you're onsite and there's an issue getting to the in-band management network (though, if you're onsite, couldn't you just console in?), but I don't see how (without the cheap circuit/VPN) you get to the OOB network when you're not onsite.
http://blog.alwaysthenetwork.com

Reggle
Post Whore
Posts: 1956
Joined: Sun May 15, 2011 4:16 pm
Certs: CCNA Security, CCNP, CCDP

Re: Nexus Setup

Tue Jul 22, 2014 8:18 am

That's what the extra internet uplink is being considered for: a device with a static IP to VPN into, directly in the OOB (using local usernames, of course; no production RADIUS). That would introduce a backdoor, though, so it's something to be careful with.
And sure, you can console in, but being able to log in to the OOB and have all your GUI clients at your disposal is better. We're talking ASDM and competitor firewall GUI clients, vSphere connections, Java applets for remote consoles on bare-metal servers, and so on. It's not just the network team; the OOB is for all infrastructure-related stuff.
Another consideration for that remote VPN is that the same VPN endpoint (e.g. a dedicated ASA) would provide a public NAT so we can quickly download something from a vendor website while on the DR wireless. The dedicated internet uplink would be a consumer-grade line, nothing that's part of our own BGP setup.
http://reggle.wordpress.com

that1guy15
Post Whore
Posts: 3224
Joined: Thu Apr 29, 2010 6:12 pm
Certs: CCNP, CCDP, CCIP

Re: Nexus Setup

Tue Jul 22, 2014 12:07 pm

Reggle, that is a great breakdown of how you are addressing your OOB, and it sounds like a well-thought-out setup. People (myself included, I'm sure) throw the term OOB around without realizing the amount of design and thought needed to deploy a true OOB.

Also, for remote access, OpenGear (and I'm sure other vendors) offers 3G/4G cellular directly on some of their console servers. I've been looking at picking up a couple for our MDF and DC for this reason.
http://blog.movingonesandzeros.net/

Vito_Corleone
Moderator
Posts: 9850
Joined: Mon Apr 07, 2008 10:38 am
Certs: CCNP RS, CCNP DC, CCDP, CCIP

Re: Nexus Setup

Tue Jul 22, 2014 12:39 pm

Reggle wrote:That's what the extra internet uplink is being considered for: a device with a static IP to VPN into, directly in the OOB (using local usernames, of course; no production RADIUS). That would introduce a backdoor, though, so it's something to be careful with.
And sure, you can console in, but being able to log in to the OOB and have all your GUI clients at your disposal is better. We're talking ASDM and competitor firewall GUI clients, vSphere connections, Java applets for remote consoles on bare-metal servers, and so on. It's not just the network team; the OOB is for all infrastructure-related stuff.
Another consideration for that remote VPN is that the same VPN endpoint (e.g. a dedicated ASA) would provide a public NAT so we can quickly download something from a vendor website while on the DR wireless. The dedicated internet uplink would be a consumer-grade line, nothing that's part of our own BGP setup.


I guess I'm still not seeing much value in your implementation, honestly.

I don't see why OOB management for servers (iLO, CIMC, etc.) needs to be on a dedicated network (obviously a separate VLAN, but not separate physical infrastructure). If the network is hosed enough to prevent the server guys from getting to a dedicated management VLAN, I'm not sure what good it does for them to get to their OOB interface; obviously there are much bigger issues than bootstrapping a server. I can't really think of a scenario where managing a server will fix a core outage - maybe you could argue that a loop would do it, but if (big if here) a server guy knows his device is creating a loop, I'd prefer he just unplugged it instead of taking the time to log in OOB and fix it.

As for GUIs, I'm, again, not really seeing a scenario where this adds a ton of value. I suppose you could jack an appliance up so badly that it takes the entire network down, but getting to the GUI isn't going to be my first thought - especially if I have to go into the DC to get on the OOB wireless network anyway. It does seem a little more useful, but you still have to be onsite to use it if you don't have a VPN or some other external connectivity.

I just fail to see how your implementation really "pays off". With the VPN added, I can see some value, but I still believe what I said earlier: it's expensive and logistically complex for little true value. IMO, your current setup is mostly window dressing.

I've seen and deployed various OOB pieces for remote DCs, like console servers (using console ports, not OOB Ethernet) for downstream devices (MoR, ToR, etc.), but I've yet to see a truly OOB deployment (with external access - otherwise, again, what's the point?) that is completely separate from the prod network.
http://blog.alwaysthenetwork.com

Reggle
Post Whore
Posts: 1956
Joined: Sun May 15, 2011 4:16 pm
Certs: CCNA Security, CCNP, CCDP

Re: Nexus Setup

Tue Jul 22, 2014 2:56 pm

Good arguments. I'm not going to defend this design in detail: I have no real arguments for it, as I haven't yet faced an actual disaster scenario myself where I'd learn the key differences. Maybe it does turn out to be window dressing when that happens.
If anything, it keeps running when the core production network is down due to a loop or an unexpected non-redundant component, giving you the chance to check logs on multiple platforms while trying to find out what's going on.
Expenses: it will probably be more, yes. But logistically complex? I'll stick with 'no' here. It's just one VLAN, or at most a few, and putting every server's OOB port on these switches was easy to explain to the cabling team, with all ports having the same configuration. You barely need to manage these OOB switches; they end up like a small remote office as far as time spent configuring things goes.
http://reggle.wordpress.com
