pfSense bugtracker: Issues (https://redmine.pfsense.org/)

pfSense Packages - Bug #15172 (New): Tailscale interface goes down without reason
https://redmine.pfsense.org/issues/15172 (2024-01-18, Carlos Montalvo J.)
Tailscale on pfSense 2.7.2-RELEASE (tailscale package v0.1.4 [tailscale-1.54.0])
Running on a VM (Proxmox v8.x, latest, with Open vSwitch) with VMXNET interfaces. Service Watchdog should restart the VPN, but it doesn't: it does not look at the interface status.
[Attachment: Kernel logs - https://redmine.pfsense.org/attachments/download/5855/clipboard-202401172043-aqnjt.png]
[Attachment: Service watchdog config - https://redmine.pfsense.org/attachments/download/5857/clipboard-202401172044-hk5yq.png]
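A possible stopgap until the package checks link state is a cron job that restarts the daemon when the tun device disappears. A minimal sketch, assuming the tun device is named tailscale0 and the package rc script is /usr/local/etc/rc.d/tailscaled (both are assumptions; adjust to the actual install):

    #!/bin/sh
    # Restart tailscaled if its tun interface has vanished.
    # tailscale0 and the rc script path are assumed names, not confirmed.
    if ! ifconfig tailscale0 > /dev/null 2>&1; then
        /usr/local/etc/rc.d/tailscaled restart
    fi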

pfSense Packages - Bug #15100 (New): Tailscale IPv6 Exit Node uses first LAN interface when WAN i...
https://redmine.pfsense.org/issues/15100 (2023-12-17, Kris Phillips)

When Tailscale on pfSense Plus is used as an exit node for IPv6 connectivity and the WAN interface is set to "Only request an IPv6 prefix, do not request an IPv6 address", it uses the first sequential LAN interface's IPv6 address for outbound connectivity instead. We should probably add an option to Tailscale to select which interface supplies the NAT address for outbound IPv4 and IPv6 connectivity. In my case this resulted in my internal, secure work VLAN address being used, even though my Tailscale routing policies only allowed access to my home VLAN (the work VLAN was simply the first sequential LAN). Not being able to choose the interface used for NAT on the exit node could allow access to resources that should not be reachable.
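One way to confirm which internal address the exit node is actually using for outbound NAT is to watch the state table while a Tailscale client sends traffic through the exit node. A sketch, with 2001:db8:1::/64 standing in for the (unexpected) work VLAN prefix:

    # pfctl -ss | grep '2001:db8:1:'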

pfSense - Bug #15098 (New): Wireguard crashes on boot if PPPoE is the default gateway
https://redmine.pfsense.org/issues/15098 (2023-12-15, Oskar Stroka)

This only seems to happen after a fresh boot, and only if a PPPoE connection is the default gateway. Even Service Watchdog can't bring WireGuard back up. The workaround is to go to Status > Interfaces, disconnect the PPPoE line, and enable it again. After that WireGuard starts without a problem. I've only noticed this issue after moving to newer/better hardware.
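After bouncing the PPPoE line, the recovery can be verified from a shell. A sketch, assuming the first WireGuard tunnel device is tun_wg0 (the usual name for the pfSense package, but an assumption here):

    # route -n get default
    # wg show
    # ifconfig tun_wg0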

pfSense - Bug #15084 (New): Upgrading an EFI system installed to ZFS mirror does not upgrade EFI ...
https://redmine.pfsense.org/issues/15084 (2023-12-11, Jim Pingle)

When an EFI system installed to a ZFS mirror is upgraded, the EFI loader is only updated on the first disk of the mirror (/dev/gpt/efiboot0).
If the system has EFI filesystems on the additional disks, they are not touched during upgrade.
Can be worked around by manually mounting the additional EFI partitions and copying the files.
For example, to update the loader on the second disk:
<pre><code class="shell syntaxhl"><span class="c"># mount -t msdosfs /dev/gpt/efiboot1 /mnt/</span>
<span class="c"># cp -R /boot/efi/ /mnt</span>
<span class="c"># umount /mnt</span>
</code></pre>
Note that systems may or may not actually have a proper EFI filesystem on the additional disks. See #15083 (Installing to ZFS mirror does not format or populate EFI partition on additional disks): https://redmine.pfsense.org/issues/15083
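Whether a given extra disk has a usable EFI filesystem can be checked before copying. A sketch, assuming the second disk is ada1 and its partition is labeled efiboot1 as above:

    # gpart show -l ada1
    # mount -t msdosfs /dev/gpt/efiboot1 /mnt
    # ls -l /mnt/efi/boot
    # umount /mnt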
Marked as Plus 24.03/CE 2.8.0, but if it can be fixed in the pfSense-boot package the fix could be picked back to 23.09.1/2.7.2.

pfSense - Bug #15082 (New): Upgrade fails due to unmounted EFI filesystem
https://redmine.pfsense.org/issues/15082 (2023-12-11, Jim Pingle)

This may be related to #15081 (Upgrade fails due to undersized EFI filesystem), but it's not definite.
Some upgrades have failed in pfSense-boot if the EFI partition is not manually mounted first.
There are several reports where simply mounting the EFI partition manually before starting the upgrade allows it to complete. See for example: https://www.reddit.com/r/PFSENSE/comments/18d887u/netgate_releases_pfsense_plus_software_version/kcjcktm/
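Until this is fixed, mounting the EFI filesystem before kicking off the upgrade should avoid the failure. A sketch, assuming the msdosfs label is EFISYS (see #15081 for label caveats):

    # mkdir -p /boot/efi
    # mount_msdosfs /dev/msdosfs/EFISYS /boot/efi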
Marked as Plus 24.03/CE 2.8.0, but if it can be fixed in the pfSense-boot package the fix could be picked back to 23.09.1/2.7.2.

pfSense - Bug #15081 (New): Upgrade fails due to undersized EFI filesystem
https://redmine.pfsense.org/issues/15081 (2023-12-11, Jim Pingle)

Some installations as recent as Plus 22.01 / CE 2.6.0 have EFI partitions that were created and/or populated by the old EFIFAT image method. This means that while the EFI partition is 200M, the EFI filesystem is only around 700KB. As a result, these installations are unable to upgrade to recent versions successfully because the loader cannot be updated.
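An affected system can be spotted by mounting the EFI filesystem and comparing its size against the 200M partition; on an EFIFAT-era install, df reports a filesystem of only a few hundred KB. Assuming the label EFISYS:

    # mount_msdosfs /dev/msdosfs/EFISYS /boot/efi
    # df -h /boot/efi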
This can be worked around by reformatting the EFI partition directly and copying the appropriate files back into place, as described in this forum post: https://forum.netgate.com/post/1140955
<pre><code class="shell syntaxhl"><span class="c"># mkdir -p /boot/efi</span>
<span class="c"># mount_msdosfs /dev/msdosfs/EFISYS /boot/efi</span>
<span class="c"># mkdir -p /tmp/efitmp</span>
<span class="c"># cp -Rp /boot/efi/* /tmp/efitmp</span>
<span class="c"># umount /boot/efi</span>
<span class="c"># newfs_msdos -F 32 -c 1 -L EFISYS /dev/msdosfs/EFISYS</span>
<span class="c"># mount_msdosfs /dev/msdosfs/EFISYS /boot/efi</span>
<span class="c"># cp -Rp /tmp/efitmp/* /boot/efi/</span>
</code></pre>
There are some potential complications here. For example, the EFI filesystem may not be labeled that way: it could be /dev/gpt/EFISYS, or it may have no label at all.
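When the label is different or missing, the EFI partition can be located by listing labels and partition types first (gpart marks EFI partitions with the "efi" type):

    # glabel status
    # gpart show -p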
Marked as Plus 24.03/CE 2.8.0, but if it can be fixed in the pfSense-boot package the fix could be picked back to 23.09.1/2.7.2.

pfSense - Bug #14983 (New): Upgrade can fail when unexpected EFI partitions are present.
https://redmine.pfsense.org/issues/14983 (2023-11-14, Steve Wheeler)

pfSense-upgrade can fail when the pfSense-boot post-install script tries to update the boot loader and the first EFI partition it finds is not on the boot drive.
For example, if the main boot drive is not installed as UEFI and the installation media is still present, the script tries and fails to update the wrong drive, aborting the upgrade:
    Number of packages to be reinstalled: 1
    [1/1] Reinstalling pfSense-boot-23.09...
    [1/1] Extracting pfSense-boot-23.09: .......... done
    mount_msdosfs: /dev/msdosfs/EFISYS: Read-only file system
    pkg-static: POST-INSTALL script failed
    failed.
    __RC=1 __REBOOT_AFTER=10
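Listing every EFI partition the system can see before upgrading shows whether the first one found actually lives on the boot drive:

    # gpart show -p | grep efi
    # glabel status | grep -i efi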

pfSense Packages - Bug #14676 (Confirmed): Listening Port option in the Tailscale configurator is...
https://redmine.pfsense.org/issues/14676 (2023-08-10, David G)
The tailscaled process starts and listens on a random port instead of the one specified. This causes direct tunnels between Tailscale nodes to fail (WAN rule), so all traffic is relayed whenever the other device is behind double NAT or another hard NAT type. If I go and see which port is actually being used and adjust my WAN rule, direct connections are suddenly all established.
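The port actually in use can be confirmed from a shell and matched against the WAN rule:

    # sockstat -l | grep tailscaled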
How to reproduce:
1. Set a listening port
2. Start the tailscale service
3. View which port is actually being listened on by executing "sockstat -l"

pfSense Packages - Bug #14556 (New): Tailscale dropping routes from FIB
https://redmine.pfsense.org/issues/14556 (2023-07-07, Chris Linstruth)

The installation has several Tailscale nodes. The problematic node is a 6100; some of the other nodes are 2100s.
At some point in the past, one of the nodes started malfunctioning whenever specific types of changes are made:
- Add or remove a node with routed subnets: all routes drop. Nodes without routes can be added and removed successfully. This is in the Tailscale machine config.
- Simply marking a route as active or inactive (Tailscale edit route settings) will also trigger it.
It also occurs occasionally without any changes being made. Bouncing the tailscale process on that 6100 node brings the routes back. The routes just drop from the kernel FIB, and only on the one node.
There is essentially nothing logged (even at DEBUG level) regarding the actions of the tailscale routing protocol, nor is there anything of troubleshooting value on the tailscale cloud site.
All IPv4 tailscale routes drop, including host routes. It is probably noteworthy that the IPv6 /48 is still in the table and tailscaled is still running.
Another possibly interesting note: the routes advertised by the 6100 that drops the routes remain advertised into the tailnet and present on the other nodes.
<p>The nodes are still showing as “idle” so tailscale is still “up.”</p>
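To catch the drop in the act, routing socket messages can be watched in real time while making one of the triggering changes, and the tailscale routes snapshotted before and after. A sketch, assuming the tun device is tailscale0:

    # route -n monitor
    # netstat -rn | grep tailscale0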
Attempted to duplicate this on a tailnet with four pfSense nodes advertising routes and two devices without routes; it could not be made to misbehave.

pfSense Packages - Bug #13405 (New): Wireguard: The webgui becomes excessively slow to respond wi...
https://redmine.pfsense.org/issues/13405 (2022-08-11, Steve Wheeler)

WebGUI pages that include data from WireGuard can become very slow to respond when a large number of elements (peers/tunnels) are present.
The delay comes from the code that parses the output of 'wg show all dump'.
For example, we see delays of ~10s opening the WireGuard status page with 80 peers defined on a 6100.
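The raw cost of the dump itself, as opposed to the PHP parsing, can be measured directly with the same peer count:

    # time wg show all dump > /dev/null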
This affects the peers, tunnels, and status pages, and to a lesser extent the dashboard when the WireGuard widget is displayed.

pfSense Packages - Bug #13095 (Feedback): Snort VRT change in Shared Object Rules path name resul...
https://redmine.pfsense.org/issues/13095 (2022-04-25, Bill Meeks)

Apparently the Snort Vulnerability Research Team recently altered part of the path name inside the Snort rules update archive. This causes the Snort package code to fail to properly extract and copy the Shared Object (SO) rules during the periodic rules update. A portion of the long directory path in the archive was changed from "x86_64" to "x86-64" (the underscore was replaced with a dash).
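Which spelling a given rules snapshot uses can be confirmed before the package tries to unpack it; the archive filename below is a placeholder:

    # tar -tzf snortrules-snapshot-XXXX.tar.gz | grep -E 'x86[_-]64' | head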

pfSense Packages - Bug #12979 (Pull Request Review): Snort Rules Update Process Using Deprecated ...
https://redmine.pfsense.org/issues/12979 (2022-03-23, Bill Meeks)

Beginning around the first of March 2022, the Snort VRT changed the subdirectory name for the precompiled Shared Object (SO) rules in the rules update archive from "FreeBSD-12" to "FreeBSD-13". The Snort rules update code in the GUI parses the current FreeBSD version from the operating system, and since pfSense is still on FreeBSD 12.3, the rules update code searches for a nonexistent "FreeBSD-12" subdirectory when unpacking the archive. Until pfSense moves to FreeBSD 13, this logic needs to be changed and the subdirectory name hard-coded to "FreeBSD-13".
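The mismatch is easy to see by comparing the OS version the GUI derives the path from against the directories actually shipped in the archive (filename again a placeholder):

    # freebsd-version -u
    # tar -tzf snortrules-snapshot-XXXX.tar.gz | grep -o 'FreeBSD-[0-9]*' | sort -u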

pfSense - Bug #12715 (New): Long system startup time when LDAP is configured and unavailable duri...
https://redmine.pfsense.org/issues/12715 (2022-01-21, Christian McDonald <cmcdonald@netgate.com>)

1. Currently, if LDAP is unavailable at system startup, several LDAP queries have to time out before the system will proceed with startup. There is no recycling of connections, so n LDAP queries require n separate connections, and thus n separate timeouts. This results in a startup hang that is several minutes long in some cases, roughly n * LDAP_timeout for the n LDAP calls required.
2. If LDAP is unavailable during system startup, the system will appear to hang at "Synchronizing user settings...".
3. This is unavoidable if LDAP connectivity relies on a VPN (e.g. IPsec, WireGuard), FRR for dynamic routes, etc., since these services are started later in the startup process.
4. We should implement some sort of global state that prevents subsequent LDAP queries if one times out during system startup, as subsequent attempts are likely to fail as well (see the sketch after this list).
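As a sketch of the global state proposed in item 4 (shell pseudocode; the real change would live in the PHP auth code, and ldap_query_cmd is a hypothetical stand-in for an LDAP query):

    #!/bin/sh
    # A marker file set on the first timeout short-circuits later attempts.
    MARKER=/var/run/ldap_unavailable_at_boot
    if [ -f "$MARKER" ]; then
        exit 1              # an earlier query already timed out; skip LDAP
    fi
    if ! timeout 5 ldap_query_cmd; then
        touch "$MARKER"     # remember the failure for the rest of startup
    fi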
Related to https://redmine.pfsense.org/issues/11644

pfSense Packages - Bug #12608 (New): WireGuard tunnels monitored by dpinger causing system to sto...
https://redmine.pfsense.org/issues/12608 (2021-12-16, Christian McDonald <cmcdonald@netgate.com>)

The current workaround is to disable gateway monitoring on WireGuard tunnel gateways.
(I will be noting observations here as I unpack this.)

pfSense - Bug #6605 (Confirmed): rc.linkup logic issues with actions taken
https://redmine.pfsense.org/issues/6605 (2016-07-12, Chris Buechler <cbuechler@gmail.com>)

The actions taken by rc.linkup differ between interfaces whose IPv4 and IPv6 are both static or unset, and every other case (where either the v4 or v6 type of the interface is dynamic). See: https://github.com/pfsense/pfsense/blob/RELENG_2_3/src/etc/rc.linkup#L70
While there are no doubt some edge-case reasons for it being the way it is, it's not sensible logic in general. The actions taken should be much closer to the same between the two cases.
The only known problem this causes is with CARP. The interface_bring_down function removes CARP VIPs from the interface. With a static v4 and track6 LAN, this leads to dual master on WAN when the LAN loses link. What should happen: the CARP IP stays in INIT, which increments net.inet.carp.demotion by 240, which makes the secondary take over the WAN VIPs. What actually happens: demotion is incremented by 240 and the secondary takes over, but then the VIP is deleted from the LAN on the primary, so the primary's demotion drops back by 240 because the VIP is gone, and the primary takes back over as master. Then you have dual master on WAN.
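The demotion sequence can be watched directly on both nodes while pulling LAN link to confirm this:

    # sysctl net.inet.carp.demotion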
<p>interface_bring_down should never be run on an interface where a CARP VIP resides to avoid this situation. It's questionable whether it's ever actually necessary or desirable when losing link on a NIC.</p>
This is a potentially touchy area for regressions, so it'll need a good deal of review, testing, and time to run in snapshots.