Bug #7313

Crazy behaviour of Virtual IP

Added by Krzysztof Szczesniak almost 8 years ago. Updated almost 4 years ago.

Status:
Closed
Priority:
Very Low
Assignee:
-
Category:
High Availability
Target version:
-
Start date:
02/24/2017
Due date:
% Done:

0%

Estimated time:
Plus Target Version:
Release Notes:
Affected Version:
2.3.3
Affected Architecture:

Description

Hello,

We are using a pfSense cluster in our environment (both nodes are running version 2.3.2-p1).
We have two WANs, and several public IP addresses (on both WAN (interface em0) and WAN2 (interface em2)) are configured for High Availability as Virtual IPs of type IP Alias, each targeting the CARP interface that is configured on each node.
The problem is that everything works fine as long as we have fewer than 4 VIPs configured on WAN2.
After adding the 4th VIP and forcing a failover from Node 1 to Node 2, strange things start happening (though not always):
- one or more IP Aliases from WAN end up assigned to WAN2 (this can be checked directly in FreeBSD using ifconfig, as shown below)
- WAN on Node 1 (which is currently acting as BACKUP) is set as MASTER for CARP vhid 1 (WAN) and CARP vhid 12 (which should be on WAN2)
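
For reference, this is the kind of check we run on each node to see which interface the aliases and CARP VHIDs actually ended up on (interface names match our setup; which aliases belong where is of course specific to our configuration):

    # List addresses and CARP state per physical interface
    ifconfig em0 | egrep 'inet |carp:'   # WAN  - should carry only the WAN aliases/VHIDs
    ifconfig em2 | egrep 'inet |carp:'   # WAN2 - should carry only the WAN2 aliases/VHIDs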

In general, the services are no longer reachable from the outside. After failover, the misconfigured IP Aliases do not respond correctly to ARP who-has requests, because the reply is sent from the wrong interface.
After fixing everything manually (deleting the IP addresses from the wrong interface and adding them to the correct one at the FreeBSD level), everything works as it should. Until the next failover... then we have a mess again.
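
The manual fix boils down to commands like these (a sketch only; the address below is an example, not one of our real public IPs):

    # Remove the alias that ended up on the wrong interface (WAN2 in our case)
    ifconfig em2 inet 198.51.100.10 -alias
    # Re-add it on the interface it belongs to (WAN), as a /32 IP alias
    ifconfig em0 inet 198.51.100.10 netmask 255.255.255.255 alias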

We managed to reproduce the problem with the following versions:
- 2.3.2-p1
- 2.3.3
- 2.3.4 devel
- 2.4.0 (next major version)

The problem does not exist if we replace the IP Aliases (which point to the CARP interfaces) with CARP VIPs.

I do not want to publish my backup file publicly, but I can send it via PM (I need to keep the public IPs private, etc.).
I can also send pictures showing the incorrect setup after failover (again via PM).
It is also possible to get access to our lab, where the problem can be reproduced.

Please let me know if you are aware of the problem and if you need anything else from me.
Thank you all for the excellent work you are doing!

#1

Updated by Marcos M almost 4 years ago

  • Status changed from New to Feedback
  • Priority changed from Normal to Very Low

This was likely due to inconsistent interface and/or port names across the nodes. Setting to feedback for now, then closing.

#2

Updated by Marcos M almost 4 years ago

  • Status changed from Feedback to Closed
