Category Archives: JNCIS-SEC

JNCIS-SEC – Exam JN0-332 passed!

Can’t believe it has already been two months since I last posted on my blog… The last two weeks of October I had to go into full-on study mode for the JN0-332 exam so I had to pause on the write-ups for a while. Fortunately the hard work paid off and after a good six months of study and labbing, with a hot summer in between, I sat the JN0-332 on the 26th of October and passed it with 83 percent.

The exam itself was very fair and covered all the topics on the blueprint and in the freely available Juniper Study Guides. I have sat a few “other vendor” exams where multiple questions caught me off-guard and were definitely not included in the related cert guides. Not really the case with this Juniper exam though, so hats off to Juniper for delivering a consistent test.

Another great thing is that Juniper lets you go back through the questions when you’re done. You can also mark questions you’re uncertain about for review. If you still have some time on the counter at the end, you can run through them once more to fill in the gaps, or revisit an earlier answer that a later question made you rethink. I’m certain this approach will earn test-takers some points.

The Score Report is pretty straightforward. It breaks the exam down into the main topics, the total number of questions, the number of correct answers and your total percentage. My worst scores were on Firewall User Authentication, which to be fair only had a couple of questions, and UTM, which I couldn’t fully simulate in my SRX100 and vSRX lab. Fair to say I wasn’t surprised by the outcome, but still very satisfied overall.

For anyone interested in the JNCIS-SEC certification, here is what I used to pass the exam…

JNCIS-SEC Study Guides

Probably the most important source of information for this exam: Juniper offers free Study Guides for all Specialist tracks on its site. You will need an account, but registration is free, so there really is no excuse. Follow this link

The first PDF covers SRX basics, Security Zones, Policies, User Authentication, NAT, IPsec and Clustering. The second PDF is dedicated to the UTM features.

Juniper SRX Series – By Brad Woodberg, Rob Cameron

Juniper SRX Series - O'Reilly

I bought this book early on, when I first encountered the SRX at a new job. Weighing in at about 1000 pages, it’s the perfect reference for anyone dealing with the SRX on a daily basis. It’s not something you’ll read front to back though; I’ve found myself reading the chapter for whatever feature I need at a given point. For example, the chapter on Screens gives you an in-depth review of each of the features, the attacks for which they were written, and so on. Highly recommended! I’ve also found that you can read the book online.

Juniper vSRX – Firefly

The virtual edition of the SRX firewall. You can run a trial version on your Hypervisor and even try the Advanced Features for 60 days. More information here.

Three SRX100Bs

I bought three of these boxes for cheap on eBay. They don’t have the high-memory option so they don’t support UTM or IPS, but they are great for configurations that are hard to do on the virtual appliance, like aggregated interfaces and clustering. An added bonus is that they also support routing and switching, so they can be used for the ENT track as well.

Junos Genius app

The official Juniper app for JNCIA and Specialist level exams. Whenever I had a few minutes off I would go through some practice questions. Very good to keep your mind on the content and memorise some of the technical details. I just wish they had a PC edition! 🙂

JUNOS GENIUS – Android version
JUNOS GENIUS – IOS version

Next challenge?

For a few weeks I was working on the Brocade Certified vRouter Engineer certification. I had already worked with some Vyatta/VyOS routers, so I figured I might as well try the free exam. Unfortunately, when I tried to book the exam, I found that the Brocade voucher had expired. I tried mailing Brocade but to no avail – they confirmed that the promotion was no longer running. That sort of halted the BCVRE endeavour…

For a good two weeks now I’ve been going through the JNCIS-ENT study guides. I bought myself a couple of EX switches, which will complement the SRX and vSRX I already have in the lab. It should also be a good refresher on the routing & switching topics, as my CCNP is up for renewal in December 2016. As I did for JNCIS-SEC, I will be writing up my ENT labs on the blog.

If you are interested in the JNCIS-SEC certification, and have any other questions, feel free to post them in the comments section. Thank you for reading!

Juniper SRX Clustering with LACP

Most deployment guides for SRX clusters out there focus on standard two-port deployments, where you have one port in, one port out and a couple of cluster links that interconnect and control the cluster. Unfortunately, in that design, one simple link failure will usually make the cluster fail over. Coming from the R&S realm, I am very careful when it comes to physical redundancy so I wanted to figure out a way to get this working with Etherchannels.

After a lot of reading and some trial and error, I ended up with a working solution. Probably not perfect, but definitely more redundant. So, in this post we will get the topology below configured and afterwards do some failover testing.

Physical connections

Basic Cluster Setup

The commands below will get the basic cluster up and running, assuming you have already configured the cluster and node ids.

set system root-authentication encrypted-password "$1$FQl4d.NC$l25c0bDGr5aPq9ZHx0R.S."
set groups node0 system host-name FW01A
set groups node1 system host-name FW01B
set apply-groups "${node}"
set chassis cluster reth-count 2
set chassis cluster redundancy-group 0 node 0 priority 120
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 120
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 preempt
set interfaces fab0 fabric-options member-interfaces fe-0/0/5
set interfaces fab1 fabric-options member-interfaces fe-1/0/5

Layer3 Etherchannel configuration on the SRX

The physical ports will be bundled into two reth interfaces. Per reth interface, we will add two physical ports from each cluster node, which yields a total of four ports. However, only two links will ever be actively forwarding traffic: those on the node that is active for the redundancy group.

Physical connections

It is important that the links from each cluster node are terminated on separate Etherchannels on the switches. Otherwise the switches would also load-balance the traffic over the two non-forwarding ports, as documented here…
In my topology, each switch has a Port-channel 11 and 12, going to its own cluster node.

{primary:node0}
root@FW01A> show configuration interfaces
fe-0/0/0 {
    fastether-options {
        redundant-parent reth0;
    }
}
fe-0/0/1 {
    fastether-options {
        redundant-parent reth0;
    }
}
fe-0/0/2 {
    fastether-options {
        redundant-parent reth1;
    }
}
fe-0/0/3 {
    fastether-options {
        redundant-parent reth1;
    }
}
fe-1/0/0 {
    fastether-options {
        redundant-parent reth0;
    }
}
fe-1/0/1 {
    fastether-options {
        redundant-parent reth0;
    }
}
fe-1/0/2 {
    fastether-options {
        redundant-parent reth1;
    }
}
fe-1/0/3 {
    fastether-options {
        redundant-parent reth1;
    }
}

Pay special attention to the configuration of the reth interfaces. First, we tie the redundant interfaces to our redundancy-group 1, in which we will later control the failover conditions.

The minimum-links command determines how many member interfaces must be up before the LACP bundle is considered up. Although we have two interfaces per cluster node, we still want traffic forwarded in a worst-case scenario. Setting minimum-links 1 will keep the Etherchannel up even in the unlikely event of three physical ports being down.

reth0 {
    vlan-tagging;
    redundant-ether-options {
        redundancy-group 1;
        minimum-links 1;
        lacp {
            active;
            periodic fast;
        }
    }
    unit 111 {
        vlan-id 111;
        family inet {
            address 1.1.1.1/28;
        }
    }
}
reth1 {
    vlan-tagging;
    redundant-ether-options {
        redundancy-group 1;
        minimum-links 1;
        lacp {
            active;
            periodic fast;
        }
    }
    unit 255 {
        vlan-id 255;
        family inet {
            address 10.255.255.254/28;
        }
    }
}

By adding the vlan-tagging statement and a logical subinterface (unit 255) with vlan-id 255 specified, we are creating a tagged L3 Etherchannel. In other words, the SRX expects packets for sub-interface reth1.255 to be tagged with a dot1q value of 255. The unit number can be any number you like, but it’s best to keep the unit number and the dot1q value aligned – for your own sanity! 🙂

Security Zones and Policies

To check basic connectivity and later run some failover tests, I have added the following configuration for the Security Zones, Policies and Source NAT. In a production environment, you probably won’t be allowing ping.

root@FW01A# show zones
security-zone trust {
    address-book {
        address Net-10.255.255.240-28 10.255.255.240/28;
    }
    host-inbound-traffic {
        system-services {
            ping;
        }
    }
    interfaces {
        reth1.255;
    }
}
security-zone untrust {
    address-book {
        address Net-1.1.1.0-28 1.1.1.0/28;
    }
    interfaces {
        reth0.111 {
            host-inbound-traffic {
                system-services {
                    ping;
                }
            }
        }
    }
}
{primary:node0}[edit security]
root@FW01A# show policies
from-zone trust to-zone untrust {
    policy Test-Policy {
        match {
            source-address Net-10.255.255.240-28;
            destination-address Net-1.1.1.0-28;
            application any;
        }
        then {
            permit;
        }
    }
}
{primary:node0}[edit security]
root@FW01A# show nat
source {
    rule-set SNAT-Trust-to-Untrust {
        from zone trust;
        to zone untrust;
        rule HideNAT-1 {
            match {
                source-address 10.255.255.240/28;
            }
            then {
                source-nat {
                    interface;
                }
            }
        }
    }
}

Preparing the switches for L2

I’m running two Cisco 3550 switches as my Layer 3 core switches. For now, I will just add the Layer 1 and 2 stuff.

vlan 111
 name untrust
!
vlan 255
 name transit


interface range fastEthernet 0/1 - 2
 channel-group 11 mode active

interface range fastEthernet 0/3 - 4
 channel-group 12 mode active

int po 11
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 switchport trunk allowed vlan 111

int po 12
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
 switchport trunk allowed vlan 255

For CS2, we can copy paste the exact same config.

A second trunk link is added between the core switches, which will carry inter-switch traffic for VLAN 111 and 255.

interface FastEthernet0/23
 channel-group 1 mode active
!
interface FastEthernet0/24
 channel-group 1 mode active

...

interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 111,255
 switchport mode trunk
 switchport nonegotiate

The LACP configuration is now complete, and after cabling everything up we see the Port-channels are in the bundled state:

NP-CS1>show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port


Number of channel-groups in use: 3
Number of aggregators:           3

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)         LACP      Fa0/23(P)   Fa0/24(P)
11     Po11(SU)        LACP      Fa0/1(P)    Fa0/2(P)
12     Po12(SU)        LACP      Fa0/3(P)    Fa0/4(P)

The SRX is a bit more detailed with the information in the lacp command:

root@FW01A> show lacp interfaces
Aggregated interface: reth0
    LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
      fe-0/0/0       Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      fe-0/0/0     Partner    No    No   Yes  Yes  Yes   Yes     Slow    Active
      fe-0/0/1       Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      fe-0/0/1     Partner    No    No   Yes  Yes  Yes   Yes     Slow    Active
      fe-1/0/0       Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      fe-1/0/0     Partner    No    No   Yes  Yes  Yes   Yes     Slow    Active
      fe-1/0/1       Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
      fe-1/0/1     Partner    No    No   Yes  Yes  Yes   Yes     Slow    Active
    LACP protocol:        Receive State  Transmit State          Mux State
      fe-0/0/0                  Current   Slow periodic Collecting distributing
      fe-0/0/1                  Current   Slow periodic Collecting distributing
      fe-1/0/0                  Current   Slow periodic Collecting distributing
      fe-1/0/1                  Current   Slow periodic Collecting distributing

Layer3 Switch Configuration

Now that we have our layer 2 connectivity, we can move on to the IP addressing. On the L3 switches, I will configure the SVIs in the transit network, and run HSRP between them with hello/hold timers of 1 and 3 seconds. This would allow for a reasonable failover time in case of an outage.

NP-CS1:

interface Vlan255
 ip address 10.255.255.242 255.255.255.240
 standby 255 ip 10.255.255.241
 standby 255 timers 1 3
 standby 255 priority 110
 standby 255 preempt

NP-CS2:

NP-CS2#sh run int vlan 255
!
interface Vlan255
 ip address 10.255.255.243 255.255.255.240
 standby 255 ip 10.255.255.241
 standby 255 timers 1 3
 standby 255 preempt

The VLAN interfaces come up immediately and as a final test for basic L3 connectivity, we try pinging the SRX from the core switch.

NP-CS1#ping 10.255.255.254 repeat 10

Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 10.255.255.254, timeout is 2 seconds:
!!!!!!!!!!
Success rate is 100 percent (10/10), round-trip min/avg/max = 1/2/4 ms

Checking the ARP table, we can see the cluster MAC address: the fourth-to-last digit (1) is the cluster ID and the last two digits (01) are the reth number.

NP-CS1#show ip arp
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  10.255.255.254          1   0010.dbff.1001  ARPA   Vlan255
Internet  10.255.255.242          -   0012.7f81.8f00  ARPA   Vlan255
Internet  10.255.255.243          8   000d.ed6f.9680  ARPA   Vlan255
Internet  10.255.255.241          -   0000.0c07.acff  ARPA   Vlan255

The default route on both core switches is pointed to 10.255.255.254

ip route 0.0.0.0 0.0.0.0 10.255.255.254

Internet segment configuration

I have connected two routers to the fa0/5 interfaces of the switches and added them to vlan 111. They are running HSRP with a VIP of 1.1.1.14 and hello/hold timers of 1 and 3 seconds.

ISP1#show standby brief
                     P indicates configured to preempt.
                     |
Interface   Grp  Pri P State   Active          Standby         Virtual IP
Fa0/1       111  110   Active  local           1.1.1.13        1.1.1.14

Running a simple ping to test connectivity between the firewalls and the ISP router.

root@FW01A> ping 1.1.1.14 count 1
PING 1.1.1.14 (1.1.1.14): 56 data bytes
64 bytes from 1.1.1.14: icmp_seq=0 ttl=255 time=2.981 ms

*snip*

To test if traffic is being forwarded across the firewall, we finally try reaching the ISP routers from the core switches.

NP-CS1#ping 1.1.1.14 source vlan 255

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 1.1.1.14, timeout is 2 seconds:
Packet sent with a source address of 10.255.255.242
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms

After many bits of configuration, our cluster is live so we can start implementing some failover mechanisms.

Configuring the interface-monitor

When the interface-monitor is configured, each redundancy group gets a monitoring threshold of 255. Each monitored interface is assigned a custom weight for the redundancy group, which is subtracted from the threshold if the interface goes physically down. When the threshold reaches zero, the redundancy group and all its objects fail over.

Note – a common mistake (trust me) is to mix up the interface monitor weights with the RG priorities. These are two separate values; the interface monitor threshold is always 255 by default.

In my lab scenario, I will give each port a value of 128. Two physical links down will trigger a failover to node 1. This is the final configuration for redundancy group 1:

{primary:node0}[edit chassis cluster]
root@FW01A# show
reth-count 2;
redundancy-group 0 {
    node 0 priority 120;
    node 1 priority 1;
}
redundancy-group 1 {
    node 0 priority 120;
    node 1 priority 1;
    preempt;
    interface-monitor {
        fe-0/0/0 weight 128;
        fe-0/0/1 weight 128;
        fe-1/0/0 weight 128;
        fe-1/0/1 weight 128;
        fe-0/0/2 weight 128;
        fe-0/0/3 weight 128;
        fe-1/0/3 weight 128;
        fe-1/0/2 weight 128;
    }
}

Failover scenarios

  • First, we will physically disconnect fe-0/0/0 which should not impact our regular traffic flow.

Disconnecting fe-0/0/0

{primary:node0}
root@FW01A> show interfaces terse | match fe-0/0/0
fe-0/0/0                up    down
fe-0/0/0.111            up    down aenet    --> reth0.111
fe-0/0/0.32767          up    down aenet    --> reth0.32767

The jsrpd daemon logged the following entries, in which we can see it subtract the weight of 128 from the threshold of 255.

root@FW01A> show log jsrpd | last | match 19:30
Oct  6 19:30:31 Interface fe-0/0/0 is going down
Oct  6 19:30:31 fe-0/0/0 interface monitored by RG-1 changed state from Up to Down
Oct  6 19:30:31 intf failed, computed-weight -128
Oct  6 19:30:31 LED changed from Green to Amber, reason is Monitored objects are down
Oct  6 19:30:31 Current threshold for rg-1 is 127. Failures: interface-monitoring
Oct  6 19:30:42 printing fpc_num h0
Oct  6 19:30:42 jsrpd_ifd_msg_handler: Interface reth0 is up
Oct  6 19:30:42 reth0 from  jsrpd_ssam_reth_read reth_rg_id=1

The chassis cluster interfaces command also shows us the interface as down and how much weight it had:

root@FW01A> show chassis cluster interfaces | find Monitoring
Interface Monitoring:
    Interface         Weight    Status    Redundancy-group
    fe-1/0/2          128       Up        1
    fe-1/0/3          128       Up        1
    fe-0/0/3          128       Up        1
    fe-0/0/2          128       Up        1
    fe-1/0/1          128       Up        1
    fe-1/0/0          128       Up        1
    fe-0/0/1          128       Up        1
    fe-0/0/0          128       Down      1

Because our threshold value is still at 127 (255 – 128), RG1 is still active on node0 and no failover event was triggered.

Redundancy group: 1 , Failover count: 0
    node0                   120         primary        yes      no
    node1                   1           secondary      yes      no
  • As a second test, we will also unplug fe-0/0/1. Without the interface monitoring, this would halt the forwarding of traffic to the untrust zone as reth0 would have no more interfaces on node0.

Disconnecting fe-0/0/1 as well

To get a basic idea of the failover time, I’m running a ping test from the core switch.

Sending 1000, 100-byte ICMP Echos to 1.1.1.14, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
*snip*

We lost one packet during the failover with a timeout of two seconds, which is not bad at all. The switchover is stateful so TCP connections should be able to handle this just fine. Let’s see what happened on the SRX…
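As a side note, if you want to confirm that the sessions really did move over with the failover (not something I captured during this test), comparing the session table on both nodes before and after pulling the cable is an easy check:

root@FW01A> show security flow session summary

In cluster mode the output is split per node, so the backed-up sessions should show up under the new primary as well. Now, back to what happened on the cluster itself.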

Our interface monitor shows two interfaces down, each with a weight of 128. The threshold has now been exhausted, which dropped node0’s RG1 priority to zero and triggered the failover.

root@FW01A> show chassis cluster interfaces | find Monitoring
Interface Monitoring:
    Interface         Weight    Status    Redundancy-group
    fe-1/0/2          128       Up        1
    fe-1/0/3          128       Up        1
    fe-0/0/3          128       Up        1
    fe-0/0/2          128       Up        1
    fe-1/0/1          128       Up        1
    fe-1/0/0          128       Up        1
    fe-0/0/1          128       Down      1
    fe-0/0/0          128       Down      1

Redundancy group 1 is now primary on node1. The routing engine is still active on node0.

root@FW01A> show chassis cluster status
Cluster ID: 1
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 0
    node0                   120         primary        no       no
    node1                   1           secondary      no       no

Redundancy group: 1 , Failover count: 1
    node0                   0           secondary      yes      no
    node1                   1           primary        yes      no

And the jsrpd log documented the entire event:

Oct  6 19:43:01 Interface fe-0/0/1 is going down
Oct  6 19:43:01 fe-0/0/1 interface monitored by RG-1 changed state from Up to Down
Oct  6 19:43:01 intf failed, computed-weight -256
Oct  6 19:43:01 RG(1) priority changed on node0 120->0 Priority is set to 0, Monitoring objects are down
Oct  6 19:43:01 Successfully sent an snmp-trap due to priority change from 120 to 0 on RG-1 on cluster 1 node 0. Reason: Priority is set to 0, Monitoring objects are down
Oct  6 19:43:01 Current threshold for rg-1 is -1. Setting priority to 0. Failures: interface-monitoring
Oct  6 19:43:01 Both the nodes are primary. RG-1 PRIMARY->SECONDARY_HOLD due to preempt/yield, my priority 0 is worse than other node's priority 1
Oct  6 19:43:01 Successfully sent an snmp-trap due to a failover from primary to secondary-hold on RG-1 on cluster 1 node 0. Reason: Monitor failed: IF
Oct  6 19:43:01 updated rg_info for RG-1 with failover-cnt 1 state: secondary-hold into ssam. Result = success, error: 0
Oct  6 19:43:01 reth0 ifd state changed from node0-primary -> node1-primary for RG-1
Oct  6 19:43:01 reth1 ifd state changed from node0-primary -> node1-primary for RG-1
Oct  6 19:43:01 updating primary-node as node1 for RG-1 into ssam. Previous primary was node0. Result = success, 0
Oct  6 19:43:01 Successfully sent an snmp-trap due to a failover from primary to secondary-hold on RG-1 on cluster 1 node 0. Reason: Monitor failed: IF
Oct  6 19:43:01 printing fpc_num h0
Oct  6 19:43:01 jsrpd_ifd_msg_handler: Interface reth0 is up
Oct  6 19:43:01 reth0 from  jsrpd_ssam_reth_read reth_rg_id=1
Oct  6 19:43:01 printing fpc_num h1
Oct  6 19:43:01 jsrpd_ifd_msg_handler: Interface reth1 is up
Oct  6 19:43:01 reth1 from  jsrpd_ssam_reth_read reth_rg_id=1
Oct  6 19:43:02 SECONDARY_HOLD->SECONDARY due to back to back failover timer expiry for RG-1
Oct  6 19:43:02 Successfully sent an snmp-trap due to a failover from secondary-hold to secondary on RG-1 on cluster 1 node 0. Reason: Back to back failover interval expired
Oct  6 19:43:02 updated rg_info for RG-1 with failover-cnt 1 state: secondary into ssam. Result = success, error: 0
Oct  6 19:43:02 Successfully sent an snmp-trap due to a failover from secondary-hold to secondary on RG-1 on cluster 1 node 0. Reason: Back to back failover interval expired

What is interesting is that the SRX is explicitly sending an SNMP trap for these events, so make sure you have a good monitoring tool and trap receiver in place, preferably with alerting.
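Getting those traps to a receiver only takes a couple of lines of SNMP configuration. A minimal sketch, assuming a trap receiver at 10.255.255.250 (an example address, not part of this lab):

set snmp trap-group Cluster-Traps version v2
set snmp trap-group Cluster-Traps targets 10.255.255.250
set snmp trap-group Cluster-Traps categories chassis

The chassis category should cover the cluster events, but double-check which trap categories your Junos release uses for the jsrpd traps before relying on it.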

  • As the ultimate test, I will unplug interface fe-1/0/0, the first port of node1, which will make the whole LACP bundle reth0 run on just one link. If we had configured minimum-links 2, this action would bring down the whole reth bundle.

We are riding on one wheel now!

After pulling the cable, reth0 is strolling along on just one link:

root@FW01A> show lacp interfaces reth0 | find protocol
    LACP protocol:        Receive State  Transmit State          Mux State
      fe-0/0/0            Port disabled     No periodic           Detached
      fe-0/0/1            Port disabled     No periodic           Detached
      fe-1/0/0            Port disabled     No periodic           Detached
      fe-1/0/1                  Current   Slow periodic Collecting distributing

And the Etherchannel is still up:

root@FW01A> show interfaces terse | match reth0
reth0                   up    up
reth0.111               up    up   inet     1.1.1.1/28
reth0.32767             up    up
root@FW01A> show chassis cluster interfaces | find Monitoring
Interface Monitoring:
    Interface         Weight    Status    Redundancy-group
    fe-1/0/2          128       Up        1
    fe-1/0/3          128       Up        1
    fe-0/0/3          128       Up        1
    fe-0/0/2          128       Up        1
    fe-1/0/1          128       Up        1
    fe-1/0/0          128       Down      1
    fe-0/0/1          128       Down      1
    fe-0/0/0          128       Down      1

For anyone designing a network solution with high-availability in mind, this is all very promising. Even with 75% of our physical links down, the reth will stay functional and forward traffic.

I have now reconnected all the cabling for a last question I’ve been pondering: how does the interface monitor react when physical ports go down on both nodes? Suppose we are running our trunks to a chassis or VSS and lose a linecard or stack member? Does the interface weight count globally and trigger a failover? Let’s find out…

  • Disconnecting fe-0/0/0 and fe-1/0/0, the first port on each node, both of which are members of reth0:
root@FW01A> show interfaces terse | match "fe-0/0/0|fe-1/0/0"
fe-0/0/0                up    down
fe-0/0/0.111            up    down aenet    --> reth0.111
fe-0/0/0.32767          up    down aenet    --> reth0.32767
fe-1/0/0                up    down
fe-1/0/0.111            up    down aenet    --> reth0.111
fe-1/0/0.32767          up    down aenet    --> reth0.32767

Both links, each with a weight of 128, are now down, but RG1 is still rolling along fine on node0 and the priority has not been set to 0. This proves that the interface-monitor weight is counted per node on which the interface resides.

Redundancy group: 1 , Failover count: 2
    node0                   120         primary        yes      no
    node1                   1           secondary      yes      no

Let’s disconnect fe-0/0/2 and fe-1/0/2 to simulate a really bad day. Our interface monitor now shows half our revenue ports down:

Interface Monitoring:
    Interface         Weight    Status    Redundancy-group
    fe-1/0/2          128       Down      1
    fe-1/0/3          128       Up        1
    fe-0/0/3          128       Up        1
    fe-0/0/2          128       Down      1
    fe-1/0/1          128       Up        1
    fe-1/0/0          128       Down      1
    fe-0/0/1          128       Up        1
    fe-0/0/0          128       Down      1

Interestingly though, both nodes have their priorities now set to zero, but node0 is still primary for RG1.

Redundancy group: 0 , Failover count: 0
    node0                   120         primary        no       no
    node1                   1           secondary      no       no

Redundancy group: 1 , Failover count: 2
    node0                   0           primary        yes      no
    node1                   0           secondary      yes      no

Is there still traffic being sent through though? Running a quick ping from the core switch:

NP-CS1#ping 1.1.1.14

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 1.1.1.14, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/3/4 ms

Conclusion

This turned into quite a lengthy post but hopefully demonstrated the benefits of having an Etherchannel over the standard single-port configurations you’ll find in most documentation. Bundling your physical interfaces in a LAG gives you that extra layer of physical redundancy with the added benefit of load sharing on the links. We were able to “lose” 75 percent of our revenue ports without any significant impact. Adding the tagged Layer3 interfaces also gives us the option to add more logical units in the future, which can in turn be assigned to their own routing-instances and zones if you’re dealing with a large-scale or multi-tenant environment.
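To illustrate that last point, here is a rough sketch of what an extra tenant could look like on top of this design; the unit number, VLAN, addressing and names are made up for the example:

set interfaces reth1 unit 300 vlan-id 300 family inet address 192.168.30.1/24
set security zones security-zone tenant-a interfaces reth1.300
set routing-instances tenant-a instance-type virtual-router
set routing-instances tenant-a interface reth1.300

The new unit rides over the same LAG and the same redundancy group, so it inherits all of the physical redundancy we just tested.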

If this was helpful for you or if you have a remark, please let me know below in the comments! Thanks for reading 🙂

External links that added to my understanding of this topic

Failure is optional – Reth Interfaces and LACP
Juniper KB – Link aggregation (LACP) supported/non-supported configurations on SRX and EX
The free JNCIS-SEC Study Guides at Juniper.net

JNCIS-SEC – Juniper SRX100 Cluster Configuration

In this post I will go through the basics of cluster configuration on the SRX. I still have a couple of SRX100s lying around, which are perfect for covering the clustering topics of the JNCIS-SEC blueprint!

Before you start configuring the cluster, always verify that both your boxes are on the same software version.

root@FW01A> show system software
Information for junos:

Comment:
JUNOS Software Release [12.1X44-D40.2]

Physical Wiring

Here is how the cluster will be cabled up. Because there’s a lot to remember during the configuration, it’s best to make this sort of diagram before you begin.

SRX100B Cluster Wiring

Connect the fxp1 and Fab ports

The Control link (fxp1) is used to synchronize configuration and performs cluster health checks by sending heartbeat messages. The physical port location depends on the SRX model, and is also configurable on the high-end models. In my case, on the branch SRX100B, the fe-0/0/7 interfaces are predetermined as fxp1.

The fab interface is used to exchange all the session state information between both devices. This provides a stateful failover if anything happens to the primary cluster node. You can choose which interface to assign; I will use fe-0/0/5 so all the first ports stay available.

Setting the Cluster-ID and Node ID

First, wipe all the old configuration and put both devices in cluster mode. Some terminology:

  • The cluster ID ranges from 1 to 15 and uniquely identifies the cluster if you have multiple clusters across the network. I will use Cluster ID 1
  • The node ID identifies both members in the cluster. A cluster will only have two members ever, so the options are 0 and 1

The commands below are entered in operational mode:

root@FW01A> set chassis cluster cluster-id 1 node 0 reboot
Successfully enabled chassis cluster. Going to reboot now
root@FW01B> set chassis cluster cluster-id 1 node 1 reboot
Successfully enabled chassis cluster. Going to reboot now

Pay attention when you enter the commands above. Make sure you are actually enabling the cluster, not disabling it. That would return the following message:

 Successfully disabled chassis cluster. Going to reboot now
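For completeness, the operational-mode command that produces that message, and that you would only run when you actually want to break the cluster apart again, is:

root@FW01A> set chassis cluster disable reboot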

Configuring the management interfaces

Once the devices have restarted we can move on to the configuration part.

To get out-of-band access to your firewalls, you really should configure both members with a management IP on the fxp0 interface.
All node-specific configuration is applied under the groups stanza, using the node0 and node1 group names. This is also where the hostnames are configured.

{primary:node0}[edit groups]
root@FW01A# show
node0 {
    system {
        host-name FW01A;
    }
    interfaces {
        fxp0 {
            unit 0 {
                family inet {
                    address 192.168.1.1/24;
                }
            }
        }
    }
}
node1 {
    system {
        host-name FW01B;
    }
    interfaces {
        fxp0 {
            unit 0 {
                family inet {
                    address 192.168.1.2/24;
                }
            }
        }
    }
}

On the SRX100B, the fxp0 interface is automatically mapped to the fe-0/0/6 interface. Be sure to check the documentation for your specific model.

Apply Group

Before committing, don’t forget to include the command below. This ensures that node-specific config is only applied to that particular node.

{primary:node0}[edit]
root@FW01A# set apply-groups "${node}"

Configuring the fabric interface

The next step is to configure your fabric links, which are used to exchange the session state. Node0 has the fab0 interface and Node1 has the fab1 interface.

{primary:node0}[edit interfaces]
root@FW01A# show
fab0 {
    fabric-options {
        member-interfaces {
            fe-0/0/5;
        }
    }
}
fab1 {
    fabric-options {
        member-interfaces {
            fe-1/0/5;
        }
    }
}

After a commit, we can see both the control and fabric links are up.

root@FW01A# run show chassis cluster interfaces
Control link status: Up

Control interfaces:
    Index   Interface        Status
    0       fxp1             Up

Fabric link status: Up

Fabric interfaces:
    Name    Child-interface    Status
                               (Physical/Monitored)
    fab0    fe-0/0/5           Up   / Up
    fab0
    fab1    fe-1/0/5           Up   / Up
    fab1

Redundant-pseudo-interface Information:
    Name         Status      Redundancy-group
    lo0          Up          0

Configuring the Redundancy Groups

The redundancy group is where you configure the cluster’s failover properties relating to a collection of interfaces or other objects. RG0 is configured by default when you activate the cluster, and manages the redundancy for the routing engines. Let’s create a new RG 1 for our interfaces.

{secondary:node0}[edit chassis]
root@FW01A# show
cluster {
    redundancy-group 0 {
        node 0 priority 100;
        node 1 priority 1;
    }
    redundancy-group 1 {
        node 0 priority 100;
        node 1 priority 1;
    }
}

Configuring Redundant Ethernet interfaces

The reth interfaces are bundles of physical ports across both cluster members. The child interfaces inherit the configuration from the overlying reth interface – think of it as being similar to an 802.3ad Etherchannel. In fact, you can enable LACP on a reth and bundle more than one physical port from each node, as sketched below.
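As a rough sketch of what that would look like, the extra child ports (fe-0/0/2 and fe-1/0/2 here, purely as an example) are simply given the same redundant parent, and LACP is enabled on the reth:

set interfaces fe-0/0/2 fastether-options redundant-parent reth0
set interfaces fe-1/0/2 fastether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options lacp active

The LACP clustering post above covers this in much more detail; for the rest of this post we will stick to a single physical port per node per reth.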

{secondary:node0}[edit chassis cluster]
root@FW01A# set reth-count 2

After entering this command, you can do a quick commit, which will make the reth interfaces visible in the terse command.

root@FW01A# run show interfaces terse | match reth
reth0                   up    down
reth1                   up    down

Now you can configure the reth interfaces as you would with any other interface, give them an IP and assign them to the Redundancy Group.

reth0 is our outside interface, and reth1 is the inside.

{secondary:node0}[edit interfaces]
root@FW01A# show reth0
redundant-ether-options {
    redundancy-group 1;
}
unit 0 {
    family inet {
        address 1.1.1.1/24;
    }
}
{secondary:node0}[edit interfaces]
root@FW01A# show reth1
redundant-ether-options {
    redundancy-group 1;
}
unit 0 {
    family inet {
        address 10.0.0.1/24;
    }
}

With the reths configured, we can add our physical ports: fe-0/0/0 (node0) and fe-1/0/0 (node1) will join reth0, while fe-0/0/1 and fe-1/0/1 will join reth1.

{secondary:node0}[edit interfaces]
root@FW01A# show
fe-0/0/0 {
    fastether-options {
        redundant-parent reth0;
    }
}
fe-0/0/1 {
    fastether-options {
        redundant-parent reth1;
    }
}
fe-1/0/0 {
    fastether-options {
        redundant-parent reth0;
    }
}
fe-1/0/1 {
    fastether-options {
        redundant-parent reth1;
    }
}

Interface monitoring

We can use interface monitoring to trigger a redundancy group failover when a link goes physically down. Each monitored interface is given a weight, which is subtracted from the redundancy group’s monitoring threshold of 255 when the link fails.

For example, node0 is primary for RG1 with a priority of 100. If we assign the physical interface an interface-monitor weight of 255, a link-down event will exhaust the threshold, drop node0’s priority to zero and trigger the failover. Configuration is applied at the redundancy-groups:

{primary:node0}[edit chassis cluster]
root@FW01A# show
reth-count 2;
redundancy-group 0 {
    node 0 priority 100;
    node 1 priority 1;
}
redundancy-group 1 {
    node 0 priority 100;
    node 1 priority 1;
    preempt;
    gratuitous-arp-count 5;
    interface-monitor {
        fe-0/0/0 weight 255;
        fe-0/0/1 weight 255;
        fe-1/0/0 weight 255;
        fe-1/0/1 weight 255;
    }
}

Finally, we add the interfaces to security zones.

root@FW01A# show security zones
security-zone untrust {
    interfaces {
        reth0.0;
    }
}
security-zone trust {
    host-inbound-traffic {
        system-services {
            ping;
        }
    }
    interfaces {
        reth1.0;
    }
}

Verification

After cabling it up, we can verify that the cluster is fully operational.

root@FW01A> show chassis cluster status
Cluster ID: 1
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 0
    node0                   100         primary        no       no
    node1                   1           secondary      no       no

Redundancy group: 1 , Failover count: 0
    node0                   100         primary        yes      no
    node1                   1           secondary      yes      no
root@FW01A> show chassis cluster interfaces
Control link status: Up

Control interfaces:
    Index   Interface        Status
    0       fxp1             Up

Fabric link status: Up

Fabric interfaces:
    Name    Child-interface    Status
                               (Physical/Monitored)
    fab0    fe-0/0/5           Up   / Up
    fab0
    fab1    fe-1/0/5           Up   / Up
    fab1

Redundant-ethernet Information:
    Name         Status      Redundancy-group
    reth0        Up          1
    reth1        Up          1

Redundant-pseudo-interface Information:
    Name         Status      Redundancy-group
    lo0          Up          0

Interface Monitoring:
    Interface         Weight    Status    Redundancy-group
    fe-1/0/1          255       Up        1
    fe-1/0/0          255       Up        1
    fe-0/0/1          255       Up        1
    fe-0/0/0          255       Up        1

Let’s see how long the failover takes, by unplugging one of the links in the trust zone.

Pinging from the inside switch to the reth1 address 10.0.0.1

theswitch#ping 10.0.0.1 repeat 10000

Type escape sequence to abort.
Sending 10000, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Excellent: we only lost one ping while failing over to node1, which works out to about two seconds.

We can see that node1 is now the primary for redundancy group 1, which holds our interfaces:

root@FW01A> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 1 , Failover count: 3
    node0                   0           secondary      yes      no
    node1                   1           primary        yes      no

{primary:node0}
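With preempt configured, RG1 will move back to node0 by itself once the cable is reconnected and the priorities recover. If you ever need to move it back by hand, the operational commands are:

root@FW01A> request chassis cluster failover redundancy-group 1 node 0
root@FW01A> request chassis cluster failover reset redundancy-group 1

The reset clears the manual failover flag so that automatic failovers can take place again afterwards.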

In the next article, I will dive a bit deeper and integrate the SRX cluster in a real-world topology.

JNCIS-SEC – Custom IDP policies on the Juniper SRX

Activating Intrusion Detection and Prevention (or IPS, as all the cool kids call it) on the SRX is quite straightforward in itself, but turning it on and relying solely on the default IDP policies is not going to cut it. While doing my own tests, running some scriptkiddie attacks against a virtual machine, I was expecting lots of sirens but unfortunately, the SRX stayed silent. No detection and definitely no prevention. As I quickly gathered, you actually need to put some thought into the services you are exposing and the kind of attacks you can expect, and then craft your own policies with the specific signatures.

For this one, I’ll be creating a custom IDP rule on the vSRX. I’m assuming you have already set up the vSRX, activated the advanced features license and set up IDP. If not, here are some links that will put you on the right path.
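A quick way to sanity-check those prerequisites from the CLI is to confirm that the license is installed and that a signature database is present:

root@NP-vSRX-01> show system license
root@NP-vSRX-01> show security idp security-package-version

If the security package version comes back empty, download and install the signature database first; the usual commands are request security idp security-package download followed by request security idp security-package install.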

Scenario

  • We are hosting an SSH/SFTP server on the public IP address 1.1.1.10/32. Every day, there are hundreds of brute-force login attempts. Obviously, the server admin has not implemented protective measures, so we will try to reduce the impact by enabling IDP.
  • Destination NAT has been configured and is translating 1.1.1.10 to the internal DMZ server 10.0.4.10 (a sketch of this follows below).
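The destination NAT rule itself isn’t shown in this post; a minimal sketch of what it could look like (the pool and rule-set names are made up here):

set security nat destination pool DNAT-SFTP-Server address 10.0.4.10/32
set security nat destination rule-set DNAT-Untrust from zone untrust
set security nat destination rule-set DNAT-Untrust rule SFTP-Server match destination-address 1.1.1.10/32
set security nat destination rule-set DNAT-Untrust rule SFTP-Server then destination-nat pool DNAT-SFTP-Server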

Here is the firewall configuration. IDP has already been enabled on this policy.

[edit security policies from-zone untrust to-zone dmz policy FW-SFTP-Servers]
root@NP-vSRX-01# show
match {
    source-address any;
    destination-address Grp-SFTP-Servers;
    application junos-ssh;
}
then {
    permit {
        application-services {
            idp;
        }
    }
    log {
        session-init;
    }
}

Server setup and brute-force script

I’m using this script, which runs a text file of common passwords against the specified username.

On the Linux server, let’s create a user, aptly named “admin”, with one of the passwords from the text file.

lab@V104-10:~$ sudo useradd admin
lab@V104-10:~$ sudo passwd admin
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

First, let’s run the script with the standard IDP policy of Server-Protection-1G in place:

lab@V120-10:/tmp/brutessh$ python brutessh.py -h 1.1.1.10 -u admin -d passlist.txt

*************************************
*SSH Bruteforcer Ver. 0.2           *
*Coded by Christian Martorella      *
*Edge-Security Research             *
*laramies@gmail.com                 *
*************************************

HOST: 1.1.1.10 Username: admin Password file: passlist.txt
===========================================================================
Trying password...
miller


Auth OK ---> Password Found: changeme
tiger

Times -- > Init: 0.032772 End: 6.861204

The SRX was not paying attention during the attack (nothing in the attack table) and the server got owned.

root@NP-vSRX-01> show security idp attack table
root@NP-vSRX-01>

Rolling your own IDP rules

Unfortunately, the server guy had to be let go and we are now looking for a way to prevent these kinds of attacks. Of course, the first thing to do would be to harden the server and enforce strong credentials, but it does help if you are notified of an ongoing attack. Even better if the SRX could also prevent or slow down the attack.

For our scenario, we are specifically looking for SSH Brute-force attacks. A good way to find out if the SRX has an on-board signature for this is to browse the attack database. You can find all these signatures in operational mode.

root@NP-vSRX-01# run show security idp attack description SSH?
Possible completions:
          Attack name
  SSH:AUDIT:SSH-V1
  SSH:AUDIT:UNEXPECTED-HEADER
  SSH:BRUTE-LOGIN
  SSH:ERROR:COOKIE-MISMATCH
  SSH:ERROR:INVALID-HEADER
  SSH:ERROR:INVALID-PKT-TYPE
  SSH:ERROR:MSG-TOO-LONG
  SSH:ERROR:MSG-TOO-SHORT
  SSH:MISC:EXPLOIT-CMDS-UNIX
  SSH:MISC:MAL-VERSION
  SSH:MISC:UNIX-ID-RESP
  SSH:NON-STD-PORT
  SSH:OPENSSH-MAXSTARTUP-DOS
  SSH:OPENSSH:BLOCK-DOS
  SSH:OPENSSH:GOODTECH-SFTP-BOF
  SSH:OPENSSH:NOVEL-NETWARE
  SSH:OVERFLOW:FREESSHD-KEY-OF
  SSH:OVERFLOW:PUTTY-VER
  SSH:OVERFLOW:SECURECRT-BOF
  SSH:PRAGMAFORT-KEY-OF
  SSH:SYSAX-MULTI-SERVER-DOS
  SSH:SYSAX-SERVER-DOS

root@NP-vSRX-01# run show security idp attack description SSH:BRUTE-LOGIN
Description: This signature detects attempts by remote attackers to log in to an SSH server by brute-forcing the password.

Another great resource for finding the right signature is this page.

If you are using one of the predefined IDP policy sets, navigate down and edit the configuration. I am using Server-Protection-1G on mine.

The policy below will detect SSH brute-force attempts and, by using the ip-action, block the source IP for 30 minutes. That should slow ’em down.

[edit security idp idp-policy Server-Protection-1G rulebase-ips]
root@NP-vSRX-01# show rule SSH-SFTP-Services
match {
    from-zone any;
    source-address any;
    to-zone any;
    destination-address any;
    application default;
    attacks {
        predefined-attacks SSH:BRUTE-LOGIN;
    }
}
then {
    action {
        close-client;
    }
    ip-action {
        ip-block;
        target source-address;
        timeout 1800;
    }
    notification {
        log-attacks;
    }
}

You can rearrange the rules with the insert command in configuration mode.
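For example, moving the new rule ahead of an existing rule in the rulebase would look something like this; the target rule name is just a placeholder for whichever rule you want to insert it before:

[edit security idp idp-policy Server-Protection-1G rulebase-ips]
root@NP-vSRX-01# insert rule SSH-SFTP-Services before rule <existing-rule-name>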

Testing the policy

Running the script for a second time, it only took a small number of brute-force attempts before the alarm triggered. After a minute or two of time-outs, the script gave up and started spawning Python errors.

lab@V120-10:/tmp/brutessh$ python brutessh.py -h 1.1.1.10 -u admin -d passlist.txt

*************************************
*SSH Bruteforcer Ver. 0.2           *
*Coded by Christian Martorella      *
*Edge-Security Research             *
*laramies@gmail.com                 *
*************************************

HOST: 1.1.1.10 Username: admin Password file: passlist.txt
===========================================================================
Trying password...
abc123
Exception in thread carmen
:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "brutessh.py", line 44, in run
    t = paramiko.Transport(hostname)
  File "/tmp/brutessh/paramiko/transport.py", line 235, in __init__
    sock.connect((hostname, port))
  File "/usr/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 110] Connection timed out
Exception in thread test
:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "brutessh.py", line 44, in run
    t = paramiko.Transport(hostname)
  File "/tmp/brutessh/paramiko/transport.py", line 235, in __init__
    sock.connect((hostname, port))
  File "/usr/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 110] Connection timed out
Exception in thread password1
:

Some verification commands on the SRX itself.

root@NP-vSRX-01> show security log | match IDP:
2015-09-17 19:22:45 UTC  IDP: at 1442517764, SIG Attack log <2.2.2.1/6768->1.1.1.10/22> for TCP protocol and service SERVICE_IDP application NONE by rule 6 of rulebase IPS in policy Server-Protection-1G. attack: repeat=0, action=CLOSE_CLIENT, threat-severity=MEDIUM, name=SSH:BRUTE-LOGIN, NAT <0.0.0.0:0->10.0.4.10:0>, time-elapsed=0, inbytes=0, outbytes=0, inpackets=0, outpackets=0, intf:untrust:ge-0/0/0.0->dmz:ge-0/0/2.0, packet-log-id: 0, alert=no, username=N/A, roles=N/A and misc-message -

The SRX detected three login attempts before shunning the remote IP:

root@NP-vSRX-01> show security idp attack table
IDP attack statistics:

  Attack name                                  #Hits
  SSH:BRUTE-LOGIN                              3
root@NP-vSRX-01> show security idp attack detail SSH:BRUTE-LOGIN
Display Name: SSH: Brute Force Login Attempt
Severity: Minor
Category: SSH
Recommended: true
Recommended Action: Drop
Type: signature
Direction: CTS
False Positives: rarely
Service: SSH
Shellcode: no
Flow: control
Context: first-data-packet
Negate: false
TimeBinding:
        Scope: peer
        Count: 10
Hidden Pattern: False
Pattern: \[SSH\].*
root@NP-vSRX-01> show security idp counters action
IDP counters:

  IDP counter type                                                      Value
 None                                                                    0
 Recommended                                                             0
 Ignore                                                                  0
 Diffserv                                                                0
 Drop packet                                                             0
 Drop                                                                    0
 Close                                                                   0
 Close server                                                            0
 Close client                                                            3
 IP action rate limit                                                    0
 IP action drop                                                          3
 IP action close                                                         0
 IP action nofity                                                        0
 IP action failed                                                        0

Wrapping up

Although very basic, this example demonstrated that it absolutely pays off to invest some time in crafting your own IDP rules. Keeping a detailed inventory of the services you are hosting and matching it against the associated application-level attack signatures will greatly increase your security posture. Combined with the Screen Options, which protect against some of the L3/L4 attacks, this makes the SRX well worth considering 🙂
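As a reference point, a bare-bones Screen profile attached to the untrust zone could look something like the sketch below; the profile name and thresholds are illustrative only, so tune them to your own traffic patterns.

set security screen ids-option Untrust-Screen icmp flood threshold 1000
set security screen ids-option Untrust-Screen tcp syn-flood attack-threshold 200
set security screen ids-option Untrust-Screen ip spoofing
set security zones security-zone untrust screen Untrust-Screen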

Route-based IPsec tunnels on the SRX

Expanding on the basic branch setup from my previous labs, I added another virtual SRX to the topology to exercise the VPN stuff. As the “ISP” router, I first tried adding a CSR-1000V to the lab but found out the hard way that it’s bandwidth throttled at 100kbps. With six virtual machines behind it sporadically fetching stuff off the internet, the entire lab came to a grinding halt. 🙂

As a quick workaround, I settled on a VyOS appliance, which was surprisingly easy to work with. It’s very similar to JunOS so I had it up and running in just a couple of minutes.

Anyway, enough about that. Here’s the network I’ll be working off to build some VPNs…

Topology

VPN Topology

Before you configure VPN tunnels, make sure that your public interface is listening for IKE traffic. This is defined on the zone or interface level.

[edit security zones security-zone untrust]
root@NP-vSRX-01# show
host-inbound-traffic {
    system-services {
        ssh;
        ping;
        ike;
    }
}
interfaces {
    ge-0/0/0.0;
}

Tunnel interface, routing and security zone

First, define a Secure Tunnel interface with IPv4 support by adding the inet family. If you forget the family inet statement, your tunnel will not pass traffic.

For simple tunnels without dynamic routing protocols, assigning an IP address is not required.

[edit interfaces]
root@NP-vSRX-01# show st0
unit 1 {
    description "Branch1 Tunnel Interface";
    family inet;
}

Create a static route for the peer subnet(s) and point it to the tunnel interface.

[edit]
root@NP-vSRX-01# set routing-options static route 10.2.0.0/24 next-hop st0.1

One thing I tend to forget is to add the st0 interface to the right security zone. Rather than putting it in the untrust zone, I will create a separate zone for VPN and put the tunnel there.

[edit security zones security-zone vpn]
root@NP-vSRX-01# show
interfaces {
    st0.1;
}

Once that is done, we can start configuring the actual VPN policy. There are a lot of components to configure, but once you know how they intertwine it’s pretty straightforward.

Configuring Phase 1

First, configure an IKE proposal. You could choose one of the JunOS templates, but where’s the fun in that, right? 🙂

[edit security ike]
root@NP-vSRX-01# show
proposal P1-Proposal-Branch1 {
    description "Branch1 Phase1 Proposal";
    authentication-method pre-shared-keys;
    authentication-algorithm sha-256;
    encryption-algorithm aes-256-cbc;
    lifetime-seconds 43200;
}

Once we have the P1 proposal, we define the phase 1 policy which in turn refers to the proposal. Unless your peer has a dynamic IP, main mode is what you need. This is also where you specify the preshared key.

[edit security ike policy P1-Policy-Branch1]
root@NP-vSRX-01# show
mode main;
description "Branch1 Phase1 Policy";
proposals P1-Proposal-Branch1;
pre-shared-key ascii-text "$9$DDH.f9CuB1hqMORhcle4aZGjq"; ## SECRET-DATA

Third and last for Phase 1, configure the gateway with the peer IP address and the corresponding policy. If possible, always use IKE version 2. The external interface is where the SRX expects the UDP/500 packets to arrive.

[edit security ike gateway P1-Peer-Branch1]
root@NP-vSRX-01# show
ike-policy P1-Policy-Branch1;
address 2.2.2.1;
external-interface ge-0/0/0;
version v2-only;

Now that Phase1 is set, we can move on to the IPsec configuration.

Phase 2 – IPsec

Just as we did before, we first define a proposal. Choose your most secure option for encryption and hashing, and a reasonably short keying lifetime.

[edit security ipsec proposal P2-Proposal-Branch1]
root@NP-vSRX-01# show
description "Branch1 Phase2 Proposal";
protocol esp;
authentication-algorithm hmac-sha-256-128;
encryption-algorithm aes-256-cbc;
lifetime-seconds 3600;

Configure the IPsec policy that again refers to the proposal. Unless the peer does not support it, always turn on PFS.

[edit security ipsec policy P2-Policy-Branch1]
root@NP-vSRX-01# show
description "Branch1 Phase2 Policy";
perfect-forward-secrecy {
    keys group5;
}
proposals P2-Proposal-Branch1;

The final step is where the Phase 1 and 2 components are glued together. For route-based tunnels, don’t forget to bind your st interface. I prefer to negotiate the tunnels immediately rather than waiting for traffic.

[edit security ipsec vpn P2-IPsec-Branch1]
root@NP-vSRX-01# show
bind-interface st0.1;
ike {
    gateway P1-Peer-Branch1;
    ipsec-policy P2-Policy-Branch1;
}
establish-tunnels immediately;

Security Policies

From a VPN point of view, we are ready to accept IPsec connections on this box. We are listening for IKE traffic, we have a route pointing to our tunnel interface and VPN policies are configured. The only thing we need to do is configure the firewall policies.

For this one, I will be allowing traffic from the remote sites to one of the machines on the trust zone. I already created some objects in the global address book.

From VPN to Trust:

[edit security policies from-zone vpn to-zone trust]
root@NP-vSRX-01# show
policy FW-Allow-VPN-Branch {
    match {
        source-address Host-10.2.0.10;
        destination-address Host-10.0.0.10;
        application any;
    }
    then {
        permit;
        count;
    }
}

From Trust to VPN:

[edit security policies]
root@NP-vSRX-01# show from-zone trust to-zone vpn
policy FW-Allow-VPN-Branch {
    match {
        source-address Host-10.0.0.10;
        destination-address Host-10.2.0.10;
        application any;
    }
    then {
        permit;
        count;
    }
}

Configuring the Branch site SRX

The benefit of having two SRXs is that you can easily copy-paste, edit and mirror the config.

### PHASE 1 ### 

set security ike proposal P1-Proposal-HQ description "HQ Phase1 Proposal"
set security ike proposal P1-Proposal-HQ authentication-method pre-shared-keys
set security ike proposal P1-Proposal-HQ authentication-algorithm sha-256
set security ike proposal P1-Proposal-HQ encryption-algorithm aes-256-cbc
set security ike proposal P1-Proposal-HQ lifetime-seconds 43200
set security ike policy P1-Policy-HQ mode main
set security ike policy P1-Policy-HQ description "HQ Phase1 Policy"
set security ike policy P1-Policy-HQ proposals P1-Proposal-HQ
set security ike policy P1-Policy-HQ pre-shared-key ascii-text "$9$DDH.f9CuB1hqmORhcle4aZGjq"
set security ike gateway P1-Peer-HQ ike-policy P1-Policy-HQ
set security ike gateway P1-Peer-HQ address 1.1.1.1
set security ike gateway P1-Peer-HQ external-interface ge-0/0/0
set security ike gateway P1-Peer-HQ version v2-only

### PHASE 2 ####

set security ipsec proposal P2-Proposal-HQ description "HQ Phase2 Proposal"
set security ipsec proposal P2-Proposal-HQ protocol esp
set security ipsec proposal P2-Proposal-HQ authentication-algorithm hmac-sha-256-128
set security ipsec proposal P2-Proposal-HQ encryption-algorithm aes-256-cbc
set security ipsec proposal P2-Proposal-HQ lifetime-seconds 3600
set security ipsec policy P2-Policy-HQ description "HQ Phase2 Policy"
set security ipsec policy P2-Policy-HQ perfect-forward-secrecy keys group5
set security ipsec policy P2-Policy-HQ proposals P2-Proposal-HQ
set security ipsec vpn P2-IPsec-HQ bind-interface st0.1
set security ipsec vpn P2-IPsec-HQ ike gateway P1-Peer-HQ
set security ipsec vpn P2-IPsec-HQ ike ipsec-policy P2-Policy-HQ

### TUNNEL INTERFACE ###

set interfaces st0 unit 1 description "HQ Tunnel Interface"
set interfaces st0 unit 1 family inet

### HOST INBOUND TRAFFIC ### 

set security zones security-zone untrust host-inbound-traffic system-services ike

### ADD TUNNEL TO ZONE ###

set security zones security-zone vpn interfaces st0.1

### ADD ROUTE ###

set routing-options static route 10.0.0.0/24 next-hop st0.1
set routing-options static route 10.0.4.0/24 next-hop st0.1

### SECURITY POLICIES ### 

set security policies from-zone vpn to-zone branch policy FW-Allow-VPN-HQ-In match source-address Host-10.0.0.10
set security policies from-zone vpn to-zone branch policy FW-Allow-VPN-HQ-In match destination-address Host-10.2.0.10
set security policies from-zone vpn to-zone branch policy FW-Allow-VPN-HQ-In match application any
set security policies from-zone vpn to-zone branch policy FW-Allow-VPN-HQ-In then permit
set security policies from-zone vpn to-zone branch policy FW-Allow-VPN-HQ-In then count
set security policies from-zone branch to-zone vpn policy FW-Allow-VPN-HQ-Server match source-address Host-10.2.0.10
set security policies from-zone branch to-zone vpn policy FW-Allow-VPN-HQ-Server match destination-address Host-10.0.0.10
set security policies from-zone branch to-zone vpn policy FW-Allow-VPN-HQ-Server match application any
set security policies from-zone branch to-zone vpn policy FW-Allow-VPN-HQ-Server then permit
set security policies from-zone branch to-zone vpn policy FW-Allow-VPN-HQ-Server then count
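
One thing the copy-paste above does not include is the address-book objects the policies refer to. If you are mirroring the config, something like this should do, assuming you keep them in the global address book:

set security address-book global address Host-10.0.0.10 10.0.0.10/32
set security address-book global address Host-10.2.0.10 10.2.0.10/32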

Verification

On the HQ SRX, we already have a Phase 1 tunnel up, using IKEv2 as the exchange protocol.

root@NP-vSRX-01> show security ike security-associations
Index   State  Initiator cookie  Responder cookie  Mode           Remote Address
2441630 UP     2fb27eba88550b6b  e8b7c6a94c94f8d2  IKEv2          2.2.2.1

We also have a Phase 2 tunnel, with just under one hour of lifetime left.

root@NP-vSRX-01> show security ipsec security-associations
  Total active tunnels: 1
  ID    Algorithm       SPI      Life:sec/kb  Mon lsys Port  Gateway
  <131073 ESP:aes-cbc-256/sha256 14b32bcb 3433/ unlim - root 500 2.2.2.1
  >131073 ESP:aes-cbc-256/sha256 fef69f20 3433/ unlim - root 500 2.2.2.1

The detailed view gives some more information about the IPsec parameters, like the Proxy IDs (0.0.0.0/0 in this case) and the negotiated encryption and authentication algorithms.

root@NP-vSRX-01> show security ipsec security-associations index 131073 detail
  ID: 131073 Virtual-system: root, VPN Name: P2-IPsec-Branch1
  Local Gateway: 1.1.1.1, Remote Gateway: 2.2.2.1
  Local Identity: ipv4_subnet(any:0,[0..7]=0.0.0.0/0)
  Remote Identity: ipv4_subnet(any:0,[0..7]=0.0.0.0/0)
  Version: IKEv2
    DF-bit: clear
    Bind-interface: st0.1

  Port: 500, Nego#: 32, Fail#: 0, Def-Del#: 0 Flag: 0x600a29
  Last Tunnel Down Reason: Delete payload received
    Direction: inbound, SPI: 14b32bcb, AUX-SPI: 0
                              , VPN Monitoring: -
    Hard lifetime: Expires in 3339 seconds
    Lifesize Remaining:  Unlimited
    Soft lifetime: Expires in 2752 seconds
    Mode: Tunnel(0 0), Type: dynamic, State: installed
    Protocol: ESP, Authentication: hmac-sha256-128, Encryption: aes-cbc (256 bits)
    Anti-replay service: counter-based enabled, Replay window size: 64

    Direction: outbound, SPI: fef69f20, AUX-SPI: 0
                              , VPN Monitoring: -
    Hard lifetime: Expires in 3339 seconds
    Lifesize Remaining:  Unlimited
    Soft lifetime: Expires in 2752 seconds
    Mode: Tunnel(0 0), Type: dynamic, State: installed
    Protocol: ESP, Authentication: hmac-sha256-128, Encryption: aes-cbc (256 bits)
    Anti-replay service: counter-based enabled, Replay window size: 64

A good way to verify that there is two-way communication is to inspect the IPsec counters:

root@NP-vSRX-01> show security ipsec statistics index 131073
ESP Statistics:
  Encrypted bytes:        140965904
  Decrypted bytes:       3342178434
  Encrypted packets:        1132200
  Decrypted packets:        2245688
AH Statistics:
  Input bytes:                    0
  Output bytes:                   0
  Input packets:                  0
  Output packets:                 0
Errors:
  AH authentication failures: 0, Replay errors: 251
  ESP authentication failures: 0, ESP decryption failures: 0
  Bad headers: 0, Bad trailers: 0
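
If you want to watch both counters climb from a clean baseline, for example while running a ping in each direction, you can reset the statistics first:

root@NP-vSRX-01> clear security ipsec statistics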

The ultimate test is to verify from the internal machines. Note the asterisks on the tunnel segment in the traceroute below; if an address were assigned to the st0 interface, that hop would show up in the trace results instead.

root@Branch-vSRX-01> ssh lab@10.2.0.10
lab@10.2.0.10's password:
Welcome to Ubuntu 14.10 (GNU/Linux 3.16.0-23-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

Last login: Thu Sep 10 22:27:33 2015
lab@V120-10:~$ ping 10.0.0.10
PING 10.0.0.10 (10.0.0.10) 56(84) bytes of data.
64 bytes from 10.0.0.10: icmp_seq=1 ttl=62 time=10.3 ms
64 bytes from 10.0.0.10: icmp_seq=2 ttl=62 time=13.5 ms
64 bytes from 10.0.0.10: icmp_seq=3 ttl=62 time=11.6 ms
^C
--- 10.0.0.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 10.370/11.869/13.567/1.318 ms

lab@V120-10:~$ traceroute 10.0.0.10
traceroute to 10.0.0.10 (10.0.0.10), 30 hops max, 60 byte packets
 1  10.2.0.1 (10.2.0.1)  3.864 ms  3.912 ms  4.003 ms
 2  * * *
 3  10.0.0.10 (10.0.0.10)  13.682 ms  13.679 ms  13.666 ms
lab@V120-10:~$
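
If you do want that hop to show up in the trace, you can assign an address to the st0 interface. A minimal sketch, using a made-up /30 purely for illustration (the peer would get the other address in the subnet):

set interfaces st0 unit 1 family inet address 172.31.255.1/30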

That’s it, a fully functional, although very basic, route-based VPN.

JNCIS-SEC – SRX Static NAT Configuration

I already went into source and destination NAT in the last few posts. The last one I wanted to lab up is Static NAT, which creates a one-to-one mapping between two addresses. The translation is bidirectional and ports are not translated. On the downside, it does require you to sacrifice one public IP per internal server, which is not always the most cost-efficient approach.

Here’s what I’ll be working on:

SRX NAT topology

The following Static NAT entries will be configured:

  • Server 10.0.100.10 will be translated to 1.1.1.10
  • Server 10.0.100.11 will be translated to 1.1.1.11

Using the Global Address Book

Instead of using regular IP prefixes as matching conditions, I will be using address book entries in the NAT config. Referring to the same names in both security policy and NAT makes interpreting and troubleshooting the config much more straightforward.

First, navigate over to the global address book under Security, and enter addresses as you would under the zones. I am creating objects for both the public and private addresses:

[edit security address-book global]
root@NP-vSRX-01# show
address Host-Private-10.0.100.10 10.0.100.10/32;
address Host-Private-10.0.100.11 10.0.100.11/32;
address Host-Public-1.1.1.10 1.1.1.10/32;
address Host-Public-1.1.1.11 1.1.1.11/32;

A word of caution: once you start adding entries to the global address book, the SRX will start throwing this error when you try to commit:

[edit security zones security-zone untrust]
root@NP-vSRX-01# show
##
## Warning: Zone specific address books are not allowed when there are global address books defined
##

From the looks of it, you will need to move all your zone address books over to the global one, so make sure you start off right on a new SRX. Worst case, you can still match on regular IP prefixes for NAT.
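
Moving an entry over is just a delete and a set. As an illustration, assuming an object was originally defined under the trust zone:

delete security zones security-zone trust address-book address Host-10.0.0.10
set security address-book global address Host-10.0.0.10 10.0.0.10/32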

Next is to define a rule-set and create some rules. One thing to keep in mind is that you write the static NAT rule to match on the destination address of the inbound traffic; the SRX then implicitly creates the reverse source-NAT mapping. Just picture the traffic entering your network and the address it should be translated to, and the SRX takes care of the return traffic for you.

Here is the configuration for the first host, referencing the object name by using destination-address-name:

[edit security nat static]
root@NP-vSRX-01# show
rule-set Static-From-Untrust {
    from zone untrust;
    rule NAT-DMZ-10 {
        match {
            destination-address-name Host-Public-1.1.1.10;
        }
        then {
            static-nat {
                prefix-name {
                    Host-Private-10.0.100.10;
                }
            }
        }
    }
}

Copy paste for the second host:

rule NAT-DMZ-11 {
    match {
        destination-address-name Host-Public-1.1.1.11;
    }
    then {
        static-nat {
            prefix-name {
                Host-Private-10.0.100.11;
            }
        }
    }
}
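
One more thing to keep in mind: just like with the other NAT flavours, the SRX will only answer ARP requests for 1.1.1.10 and 1.1.1.11 if proxy ARP is configured for them. If you followed the destination NAT write-up further down this page, this is already in place; for completeness it would look like this:

set security nat proxy-arp interface ge-0/0/0.0 address 1.1.1.10/32 to 1.1.1.11/32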

Verification

The most useful command for Static NAT verification is show security nat static rule, which shows you both a config overview and a hit count.

root@NP-vSRX-01> show security nat static rule all
Total static-nat rules: 2
Total referenced IPv4/IPv6 ip-prefixes: 4/0

Static NAT rule: NAT-DMZ-10           Rule-set: Static-From-Untrust
  Rule-Id                    : 1
  Rule position              : 1
  From zone                  : untrust
  Destination addresses      : Host-Public-1.1.1.10
  Host addresses             : Host-Private-10.0.100.10
  Netmask                    : 32
  Host routing-instance      : N/A
  Translation hits           : 40
    Successful sessions      : 34
    Failed sessions          : 6
  Number of sessions         : 0

Static NAT rule: NAT-DMZ-11           Rule-set: Static-From-Untrust
  Rule-Id                    : 2
  Rule position              : 2
  From zone                  : untrust
  Destination addresses      : Host-Public-1.1.1.11
  Host addresses             : Host-Private-10.0.100.11
  Netmask                    : 32
  Host routing-instance      : N/A
  Translation hits           : 112
    Successful sessions      : 112
    Failed sessions          : 0
  Number of sessions         : 0

To emphasize that no PAT is performed, here is an entry from the session table. Port 50973 is used on both sides of the firewall.

Session ID: 15148, Policy name: FW-PermitWeb/7, Timeout: 4, Valid
  In: 10.0.100.11/50973 --> 91.189.92.200/80;tcp, If: ge-0/0/1.0, Pkts: 26, Bytes: 5673
  Out: 91.189.92.200/80 --> 1.1.1.11/50973;tcp, If: ge-0/0/0.0, Pkts: 25, Bytes: 5863

JNCIS-SEC – Port forwarding on the SRX

A common technique to obscure services from network probing is to host them on ports outside of the well-known port range. This might help as a first line of defense, but in reality the ports are still there for anyone who scans beyond the defaults. It also makes life harder for other firewall engineers 🙂 On the other hand, if your ISP is blocking services hosted in the well-known range, it might be your only option.

As in all the NAT examples, here is the topology:

SRX NAT topology

We are hosting an SFTP server but are fed up with all the brute-force attacks, so we decide to host it on a different port. Traffic coming from the untrust zone, destined for 1.1.1.11 and arriving at TCP port 2222, will be translated to port 22 on the inside.

First, we define the real internal port in the DNAT pool, together with the internal IP.

[edit security nat destination]
root@NP-vSRX-01# show pool DNAT-Host100-11
address 10.0.100.11/32 port 22;

Then, we go to our rule-set, which defines the traffic direction (from untrust), and enter the address and port the firewall should be listening on, plus the pool to translate to:

[edit security nat destination rule-set DNAT-From-Untrust]
root@NP-vSRX-01# show
from zone untrust;
rule DNAT-Host100-11 {
    match {
        destination-address 1.1.1.11/32;
        destination-port {
            2222;
        }
    }
    then {
        destination-nat {
            pool {
                DNAT-Host100-11;
            }
        }
    }
}
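
Remember that the security policy is evaluated after destination NAT, so the existing FW-SSH-Server policy matching junos-ssh towards the internal host keeps working unchanged. Only if the internal service itself listened on a non-standard port (not the case here) would you need a custom application, along these lines with a hypothetical name, and reference it in the policy instead of junos-ssh:

set applications application tcp-sftp-2222 protocol tcp destination-port 2222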

That’s it. After doing a telnet to port 2222, we see the following flow in the table:

Session ID: 167059, Policy name: FW-SSH-Server/6, Timeout: 1778, Valid
  In: 10.6.60.68/62340 --> 1.1.1.11/2222;tcp, If: ge-0/0/0.0, Pkts: 3, Bytes: 132
  Out: 10.0.100.11/22 --> 10.6.60.68/62340;tcp, If: ge-0/0/1.0, Pkts: 2, Bytes: 126

Port forwarding with a dynamic IP

If your public IP address is dynamic, it won’t be possible to define one IP as a match condition. Unless your ISP always hands out addresses in one specific range, the only option is to define the destination-address as 0.0.0.0/0.

All config stays the same, but the DNAT rule looks like this:

[edit security nat destination rule-set DNAT-From-Untrust rule DNAT-Host100-11]
root@NP-vSRX-01# show
match {
    destination-address 0.0.0.0/0;
    destination-port {
        2222;
    }
}
then {
    destination-nat {
        pool {
            DNAT-Host100-11;
        }
    }
}
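
The destination NAT rules have the same hit counters as their static and source NAT counterparts, so after a quick test you can confirm the catch-all rule is actually being used (output omitted here):

root@NP-vSRX-01> show security nat destination rule all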

JNCIS-SEC Lab – Destination NAT

In this lab, we will look at configuring the SRX to translate the destination address of incoming traffic, which is widely used to publish servers in a DMZ.

As with all the NAT examples, I’ll be using the following topology:

SRX NAT topology

These are the requirements for the translations:

  • Only applies to traffic coming from the internet (untrust zone)
  • Destination 1.1.1.10/32 will be translated to DMZ IP 10.0.100.10/32
  • Destination 1.1.1.11/32 will be translated to DMZ IP 10.0.100.11/32

First, I will configure security policies for the following services:

  • Server 10.0.100.10 is hosting a Telnet server
  • Server 10.0.100.11 is an SSH/SFTP server

Because destination NAT happens before the policy lookup, the security policies always reference the address as it will be after translation, in other words the internal address.

[edit security policies from-zone untrust to-zone dmz]
root@NP-vSRX-01# show
policy FW-Telnet-Server {
    match {
        source-address any;
        destination-address Host-10.0.100.10-32;
        application junos-telnet;
    }
    then {
        permit;
        log {
            session-close;
        }
    }
}
policy FW-SSH-Server {
    match {
        source-address any;
        destination-address Host-10.0.100.11-32;
        application junos-ssh;
    }
    then {
        permit;
        log {
            session-close;
        }
    }
}
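
The destination addresses in these policies are address-book objects, which need to exist before the commit will succeed. If you are building this from scratch, defining them under the dmz zone would look roughly like this (the zone placement is my choice, a global address book works just as well):

set security zones security-zone dmz address-book address Host-10.0.100.10-32 10.0.100.10/32
set security zones security-zone dmz address-book address Host-10.0.100.11-32 10.0.100.11/32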

As this is a fresh config, we will first define a rule-set that specifies the traffic direction. Destination NAT does not give you the option to specify the to-zone, just the source zone.

[edit security nat destination]
root@NP-vSRX-01# show
rule-set DNAT-From-Untrust {
    from zone untrust;
}

Now, let’s add the two pools:

[edit security nat destination]
root@NP-vSRX-01# show
pool DNAT-Host100-11 {
    address 10.0.100.11/32;
}
pool DNAT-Host100-10 {
    address 10.0.100.10/32;
}

And here are the NAT rules themselves:

[edit security nat destination rule-set DNAT-From-Untrust]
rule DNAT-Host100-10 {
    match {
        destination-address 1.1.1.10/32;
    }
    then {
        destination-nat {
            pool {
                DNAT-Host100-10;
            }
        }
    }
}
rule DNAT-Host100-11 {
    match {
        destination-address 1.1.1.11/32;
    }
    then {
        destination-nat {
            pool {
                DNAT-Host100-11;
            }
        }
    }
}

Because the translated addresses sit in the same segment as our external interface, we will also need to add proxy ARP to the global NAT config so the SRX answers ARP requests for them.

[edit security nat]
root@NP-vSRX-01# set proxy-arp interface ge-0/0/0.0 address 1.1.1.10/32 to 1.1.1.11/32

Verification

After starting a telnet and SSH session from my PC, this is the session table:

Session ID: 166748, Policy name: FW-SSH-Server/6, Timeout: 1792, Valid
  In: 10.6.66.68/59016 --> 1.1.1.11/22;tcp, If: ge-0/0/0.0, Pkts: 3, Bytes: 132
  Out: 10.0.100.11/22 --> 10.6.66.68/59016;tcp, If: ge-0/0/1.0, Pkts: 2, Bytes: 126
Session ID: 166759, Policy name: FW-Telnet-Server/5, Timeout: 1798, Valid
  In: 10.6.66.68/60731 --> 1.1.1.10/23;tcp, If: ge-0/0/0.0, Pkts: 11, Bytes: 510
  Out: 10.0.100.10/23 --> 10.6.66.68/60731;tcp, If: ge-0/0/1.0, Pkts: 14, Bytes: 652
root@NP-vSRX-01> show security nat destination summary
Total pools: 2
Pool name            Address                           Routing        Port  Total
                     Range                             Instance             Address
DNAT-Host100-11      10.0.100.11    - 10.0.100.11                     0     1
DNAT-Host100-10      10.0.100.10    - 10.0.100.10                     0     1

Total rules: 2
Rule name            Rule set       From                               Action
DNAT-Host100-11      DNAT-From-Untrust untrust                         DNAT-Host100-11
DNAT-Host100-10      DNAT-From-Untrust untrust                         DNAT-Host100-10
root@NP-vSRX-01> show security nat destination pool all
Total destination-nat pools: 2

Pool name       : DNAT-Host100-11
Pool id         : 3
Total address   : 1
Translation hits: 1
Address range                        Port
    10.0.100.11 - 10.0.100.11           0

Pool name       : DNAT-Host100-10
Pool id         : 4
Total address   : 1
Translation hits: 8
Address range                        Port
    10.0.100.10 - 10.0.100.10           0

JNCIS-SEC Lab – Pool-based Source NAT

In the previous article, I configured the SRX to translate outgoing traffic to the external interface IP. In this article, we will look at a second way of configuring Source NAT, using a NAT address pool.

Again, I will be using the same topology:

SRX NAT topology

For the Source NAT with address pool, this is the requirement:

  • Traffic from the hosts in range 10.0.200.0/24
  • Destined to the untrust zone (the internet)
  • will be SNAT’ed to a pool with one IP address, 1.1.1.2/32

The firewall policies from the previous article, which allow basic web access, are still in place:

root@NP-vSRX-01# show security policies from-zone trust to-zone untrust
policy FW-PermitWeb {
    match {
        source-address Net-10.0.200.0-24;
        destination-address any;
        application [ junos-http junos-https junos-dns-udp ];
    }
    then {
        permit;
        log {
            session-close;
        }
    }
}

First, configure a rule-set that defines the traffic direction:

[edit security nat source]
root@NP-vSRX-01# show
rule-set NAT-Trust-to-Internet {
    from zone trust;
    to zone untrust;
}

Then, create the address pool:

[edit security nat source]
root@NP-vSRX-01# set pool SNAT-Pool-Trust-to-Internet address 1.1.1.2/32

Next, configure the NAT rule based on the requirements:

[edit security nat source rule-set NAT-Trust-to-Internet rule NAT-Source-VLAN200]
root@NP-vSRX-01# show
match {
    source-address 10.0.200.0/24;
}
then {
    source-nat {
        pool {
            SNAT-Pool-Trust-to-Internet;
        }
    }
}

After a commit, the SRX is correctly translating the traffic to 1.1.1.2 (Out > In traffic):

root@NP-vSRX-01> show security flow session source-prefix 10.0.200.10/32
Session ID: 38803, Policy name: FW-PermitWeb/4, Timeout: 42, Valid
  In: 10.0.200.10/23772 --> 8.8.8.8/53;udp, If: ge-0/0/2.0, Pkts: 1, Bytes: 63
  Out: 8.8.8.8/53 --> 1.1.1.2/15049;udp, If: ge-0/0/0.0, Pkts: 0, Bytes: 0

### omitted for brevity ###

Session ID: 38809, Policy name: FW-PermitWeb/4, Timeout: 42, Valid
  In: 10.0.200.10/15346 --> 8.8.8.8/53;udp, If: ge-0/0/2.0, Pkts: 1, Bytes: 67
  Out: 8.8.8.8/53 --> 1.1.1.2/13144;udp, If: ge-0/0/0.0, Pkts: 0, Bytes: 0
Total sessions: 7

Unfortunately, the machine does not have internet access yet. Because we are translating to an address other than our interface IP, the upstream router cannot resolve ARP for it. To solve this, we could add a static route on the “ISP” router, or configure proxy ARP on the SRX. In the real world, getting an ISP to make changes can take days, so let’s do proxy ARP.

First, find the interface which has our public IP:

[edit security nat]
root@NP-vSRX-01# run show interfaces terse | match 1.1.1.1
ge-0/0/0.0              up    up   inet     1.1.1.1/28

Then, configure the proxy-arp as a global NAT command:

[edit security nat]
root@NP-vSRX-01# set proxy-arp interface ge-0/0/0.0 address 1.1.1.2

Let’s see if we now have more sessions from the clients:

root@NP-vSRX-01# run show security flow session source-prefix 10.0.200.0/24
Session ID: 38967, Policy name: FW-PermitWeb/4, Timeout: 1792, Valid
  In: 10.0.200.10/54278 --> 54.149.61.73/443;tcp, If: ge-0/0/2.0, Pkts: 10, Bytes: 1265
  Out: 54.149.61.73/443 --> 1.1.1.2/4408;tcp, If: ge-0/0/0.0, Pkts: 8, Bytes: 3905

Session ID: 38971, Policy name: FW-PermitWeb/4, Timeout: 292, Valid
  In: 10.0.200.10/52289 --> 91.189.89.88/80;tcp, If: ge-0/0/2.0, Pkts: 7, Bytes: 1145
  Out: 91.189.89.88/80 --> 1.1.1.2/29288;tcp, If: ge-0/0/0.0, Pkts: 5, Bytes: 952

Session ID: 38975, Policy name: FW-PermitWeb/4, Timeout: 298, Valid
  In: 10.0.200.10/42145 --> 93.184.220.29/80;tcp, If: ge-0/0/2.0, Pkts: 7, Bytes: 1250
  Out: 93.184.220.29/80 --> 1.1.1.2/24688;tcp, If: ge-0/0/0.0, Pkts: 5, Bytes: 1844

Session ID: 38986, Policy name: FW-PermitWeb/4, Timeout: 1792, Valid
  In: 10.0.200.10/39442 --> 68.232.34.191/443;tcp, If: ge-0/0/2.0, Pkts: 9, Bytes: 1144
  Out: 68.232.34.191/443 --> 1.1.1.2/5689;tcp, If: ge-0/0/0.0, Pkts: 16, Bytes: 14208

### omitted for brevity ###

Session ID: 39167, Policy name: FW-PermitWeb/4, Timeout: 1792, Valid
  In: 10.0.200.10/42964 --> 37.252.170.5/443;tcp, If: ge-0/0/2.0, Pkts: 7, Bytes: 1577
  Out: 37.252.170.5/443 --> 1.1.1.2/6620;tcp, If: ge-0/0/0.0, Pkts: 6, Bytes: 1515

Session ID: 39169, Policy name: FW-PermitWeb/4, Timeout: 1792, Valid
  In: 10.0.200.10/42966 --> 37.252.170.5/443;tcp, If: ge-0/0/2.0, Pkts: 8, Bytes: 1641
  Out: 37.252.170.5/443 --> 1.1.1.2/25579;tcp, If: ge-0/0/0.0, Pkts: 7, Bytes: 1636

Session ID: 39172, Policy name: FW-PermitWeb/4, Timeout: 1794, Valid
  In: 10.0.200.10/45036 --> 37.252.170.182/443;tcp, If: ge-0/0/2.0, Pkts: 11, Bytes: 3599
  Out: 37.252.170.182/443 --> 1.1.1.2/32160;tcp, If: ge-0/0/0.0, Pkts: 10, Bytes: 6143
Total sessions: 55

Now that it’s working and we have HTTP and HTTPS sessions established, here is the full configuration again:

[edit security nat]
root@NP-vSRX-01# show
source {
    pool SNAT-Pool-Trust-to-Internet {
        address {
            1.1.1.2/32;
        }
    }
    rule-set NAT-Trust-to-Internet {
        from zone trust;
        to zone untrust;
        rule NAT-Source-VLAN200 {
            match {
                source-address 10.0.200.0/24;
            }
            then {
                source-nat {
                    pool {
                        SNAT-Pool-Trust-to-Internet;
                    }
                }
            }
        }
    }
}
proxy-arp {
    interface ge-0/0/0.0 {
        address {
            1.1.1.2/32;
        }
    }
}

Translating to a range, with PAT

When using Port Address Translation, one IP address gives us a theoretical 65,536 ports (in practice fewer are available for outbound connections), which translates to roughly the same number of concurrent sessions. When we are nearing that limit with one IP address, we can add more addresses to the pool.

Suppose we want to add 1.1.1.3/32 to the mix; we have two options. We can change the mask to /31:

[edit security nat source pool SNAT-Pool-Trust-to-Internet]
root@NP-vSRX-01# show
address {
    1.1.1.2/31;
}

Or, we can specify a from-to range:

[edit security nat source pool SNAT-Pool-Trust-to-Internet]
root@NP-vSRX-01# show
address {
    1.1.1.2/32 to 1.1.1.3/32;
}

After a commit, the flow table shows it’s translating to both 1.1.1.2 and 1.1.1.3 for one of my lab machines:

[edit security nat source pool SNAT-Pool-Trust-to-Internet]
root@NP-vSRX-01# run show security flow session
Session ID: 123095, Policy name: FW-PermitWeb/4, Timeout: 52, Valid
  In: 10.0.200.11/10224 --> 8.8.8.8/53;udp, If: ge-0/0/2.0, Pkts: 3, Bytes: 189
  Out: 8.8.8.8/53 --> 1.1.1.3/15041;udp, If: ge-0/0/0.0, Pkts: 0, Bytes: 0

  ### omitted ### 
  
Session ID: 123104, Policy name: FW-PermitWeb/4, Timeout: 2, Valid
  In: 10.0.200.11/24055 --> 8.8.8.8/53;udp, If: ge-0/0/2.0, Pkts: 1, Bytes: 67
  Out: 8.8.8.8/53 --> 1.1.1.2/26041;udp, If: ge-0/0/0.0, Pkts: 1, Bytes: 179

Session ID: 123105, Policy name: FW-PermitWeb/4, Timeout: 58, Valid
  In: 10.0.200.11/57407 --> 8.8.8.8/53;udp, If: ge-0/0/2.0, Pkts: 1, Bytes: 65
  Out: 8.8.8.8/53 --> 1.1.1.3/2018;udp, If: ge-0/0/0.0, Pkts: 0, Bytes: 0

Session ID: 123106, Policy name: FW-PermitWeb/4, Timeout: 2, Valid
  In: 10.0.200.11/46570 --> 8.8.8.8/53;udp, If: ge-0/0/2.0, Pkts: 1, Bytes: 67
  Out: 8.8.8.8/53 --> 1.1.1.2/11394;udp, If: ge-0/0/0.0, Pkts: 1, Bytes: 123

Session ID: 123107, Policy name: FW-PermitWeb/4, Timeout: 18, Valid
  In: 10.0.200.11/33438 --> 91.189.92.201/80;tcp, If: ge-0/0/2.0, Pkts: 2, Bytes: 120
  Out: 91.189.92.201/80 --> 1.1.1.3/17036;tcp, If: ge-0/0/0.0, Pkts: 0, Bytes: 0
Total sessions: 9

Unfortunately, this doesn’t always play nicely with applications that expect a client to keep the same source IP across sessions. We can instruct Junos to always NAT one particular client to the same external IP by adding the global source NAT command address-persistent:

[edit security nat source]
root@NP-vSRX-01# set address-persistent

[edit security nat source]
root@NP-vSRX-01# commit
commit complete

Disabling Port Address Translation

By default, the SRX will translate the outgoing port to a random number. We can disable this by adding port no-translation to the pool configuration.

Assume the following configuration:

  • The previous SNAT pool of 1.1.1.2/31 will remain as configured but PAT will be disabled
  • If the SRX runs out of available pool addresses, we will fall back to PAT on the interface IP. This is referred to as an overflow pool.

Source NAT configuration:

[edit security nat source pool SNAT-Pool-Trust-to-Internet]
root@NP-vSRX-01# show
address {
    1.1.1.2/32 to 1.1.1.3/32;
}
port {
    no-translation;
}
overflow-pool interface;

As shown below, the source port stays the same on both the ingress and egress sides of the session.

root@NP-vSRX-01# run show security flow session
Session ID: 164827, Policy name: FW-PermitWeb/4, Timeout: 290, Valid
  In: 10.0.200.10/48125 --> 91.189.91.23/80;tcp, If: ge-0/0/2.0, Pkts: 25, Bytes: 5621
  Out: 91.189.91.23/80 --> 1.1.1.2/48125;tcp, If: ge-0/0/0.0, Pkts: 24, Bytes: 5768

To conclude, here are some show commands that will help during configuration and troubleshooting:

root@NP-vSRX-01> show security nat source summary
Total port number usage for port translation pool: 64512
Maximum port number for port translation pool: 33554432
Total pools: 1
Pool                 Address                  Routing              PAT  Total
Name                 Range                    Instance                  Address
SNAT-Pool-Trust-to-Internet 1.1.1.2-1.1.1.2   default              yes  1

Total rules: 1
Rule name          Rule set       From              To                   Action
NAT-Source-VLAN200 NAT-Trust-to-Internet trust      untrust              SNAT-Pool-Trust-to-Internet
root@NP-vSRX-01> show security nat source pool SNAT-Pool-Trust-to-Internet

Pool name          : SNAT-Pool-Trust-to-Internet
Pool id            : 4
Routing instance   : default
Host address base  : 0.0.0.0
Port               : [1024, 63487]
Twin port          : [63488, 65535]
Port overloading   : 1
Address assignment : no-paired
Total addresses    : 1
Translation hits   : 275
Address range                        Single Ports   Twin Ports
            1.1.1.2 - 1.1.1.2            1              0
root@NP-vSRX-01> show security nat source rule all
Total rules: 1
Total referenced IPv4/IPv6 ip-prefixes: 1/0

source NAT rule: NAT-Source-VLAN200   Rule-set: NAT-Trust-to-Internet
  Rule-Id                    : 1
  Rule position              : 1
  From zone                  : trust
  To zone                    : untrust
  Match
    Source addresses         : 10.0.200.0      - 10.0.200.255
  Action                        : SNAT-Pool-Trust-to-Internet
    Persistent NAT type         : N/A
    Persistent NAT mapping type : address-port-mapping
    Inactivity timeout          : 0
    Max session number          : 0
  Translation hits           : 275
    Successful sessions      : 275
    Failed sessions          : 0
  Number of sessions         : 1

JNCIS-SEC Lab – Interface NAT on the SRX

In this NAT configuration example I will be configuring Interface NAT (interface-based source NAT) on the Juniper SRX, which translates the source address of outgoing packets to the external interface address of the SRX.

This is the topology I will be using for all NAT configurations.

SRX NAT topology

These are the requirements for the configuration:

  • Traffic from the hosts in range 10.0.200.0/24
  • Destined to the untrust zone (the internet)
  • will be SNAT’ed to the external interface IP, 1.1.1.1

First, I will configure an address book object for the network range.

[edit security zones security-zone trust]
root@NP-vSRX-01# set address-book address Net-10.0.200.0-24 10.0.200.0/24

And configure a security policy that allows HTTP, HTTPS and DNS (UDP) to the internet (any destination).

[edit security policies from-zone trust to-zone untrust policy FW-PermitWeb]
root@NP-vSRX-01# show
match {
    source-address Net-10.0.200.0-24;
    destination-address any;
    application [ junos-http junos-https junos-dns-udp ];
}
then {
    permit;
    log {
        session-close;
    }
}

To define the source NAT, I will first create a rule set that is specific for this zone pair.

Note – Rule-sets are where you will group different NAT rules based on traffic direction. You can match on interface, zone and routing-instance, as displayed below. When two rule-sets match for a particular traffic flow, the most specific one will be preferred, with interface being the most specific, then zones and finally routing-instances.

[edit security nat source rule-set NAT-Trust-to-Internet]
root@NP-vSRX-01# set from ?
Possible completions:
+ interface            Source interface list
+ routing-instance     Source routing instance list
+ zone                 Source zone list

The rule-set for this zone pair:

[edit security nat source]
root@NP-vSRX-01# show
rule-set NAT-Trust-to-Internet {
    from zone trust;
    to zone untrust;
}

And here is the NAT rule I have defined:

[edit security nat source rule-set NAT-Trust-to-Internet]
root@NP-vSRX-01# show rule NAT-Source-VLAN200
match {
    source-address 10.0.200.0/24;
}
then {
    source-nat {
        interface;
    }
}

I could also have matched on 0.0.0.0/0 as the destination address, but that would just have been one more line of config (shown below for completeness).
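
In set form, that extra match condition would simply be:

set security nat source rule-set NAT-Trust-to-Internet rule NAT-Source-VLAN200 match destination-address 0.0.0.0/0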

Verifying the translations:

root@NP-vSRX-01> show security flow session source-prefix 10.0.200.0/24
Session ID: 17713, Policy name: FW-PermitWeb/4, Timeout: 2, Valid
  In: 10.0.200.10/49751 --> 8.8.8.8/53;udp, If: ge-0/0/2.0, Pkts: 1, Bytes: 55
  Out: 8.8.8.8/53 --> 1.1.1.1/29242;udp, If: ge-0/0/0.0, Pkts: 1, Bytes: 71

Session ID: 17714, Policy name: FW-PermitWeb/4, Timeout: 2, Valid
  In: 10.0.200.10/49751 --> 8.8.4.4/53;udp, If: ge-0/0/2.0, Pkts: 1, Bytes: 55
  Out: 8.8.4.4/53 --> 1.1.1.1/13555;udp, If: ge-0/0/0.0, Pkts: 1, Bytes: 71
Total sessions: 2

We see internal traffic flows from 10.0.200.10 going to 8.8.8.8 and 8.8.4.4 (In). The return traffic (Out) is sent back to a translated port on 1.1.1.1, the interface IP. This means the NAT is working as required.

For a brief summary of the NAT configuration, enter the following:

root@NP-vSRX-01> show security nat source summary
Total port number usage for port translation pool: 0
Maximum port number for port translation pool: 33554432
Total pools: 0

Total rules: 1
Rule name          Rule set       From              To                   Action
NAT-Source-VLAN200 NAT-Trust-to-Internet trust      untrust              interface

And to view even more detail and some statistics about the rule:

root@NP-vSRX-01> show security nat source rule NAT-Source-VLAN200

source NAT rule: NAT-Source-VLAN200   Rule-set: NAT-Trust-to-Internet
  Rule-Id                    : 1
  Rule position              : 1
  From zone                  : trust
  To zone                    : untrust
  Match
    Source addresses         : 10.0.200.0      - 10.0.200.255
  Action                        : interface
    Persistent NAT type         : N/A
    Persistent NAT mapping type : address-port-mapping
    Inactivity timeout          : 0
    Max session number          : 0
  Translation hits           : 604
    Successful sessions      : 604
    Failed sessions          : 0
  Number of sessions         : 0