Feature #3882


Add OUI database to the base system, remove dependency on nmap

Added by Jim Pingle about 10 years ago. Updated over 6 years ago.

Status: New
Priority: Normal
Assignee: -
Category: Web Interface
Target version: -
Start date: 09/22/2014
Due date: -
% Done: 0%
Estimated time: -
Plus Target Version: -
Release Notes: -

Description

Currently, some pages that deal with MAC addresses, such as the ARP table and DHCP leases views, show the manufacturer of the device based on the information in the OUI database, if it is available. That file is currently supplied by the nmap package when it is installed, but it would be nice to have it in the base system without a package dependency.
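As a rough illustration of the kind of lookup those pages perform (a sketch only; the list path and the nmap-style "XXXXXX Vendor" line format are assumptions here, not confirmed pfSense code):

    # Hypothetical lookup: match the first three octets of a MAC address
    # against an nmap-style prefix list with lines like "000C29 VMware".
    MAC="00:0c:29:ab:cd:ef"
    PREFIX=$(echo "${MAC}" | tr -d ':' | cut -c1-6)
    grep -i "^${PREFIX} " /usr/local/share/nmap/nmap-mac-prefixes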

The data is available from the IEEE:
http://standards.ieee.org/develop/regauth/oui/public.html
http://standards.ieee.org/develop/regauth/oui/oui.txt

Attached is a basic script (with much room for improvement) that downloads the file and massages it into the same format as the nmap OUI list. It should be sufficient to tie the script into the build process to update the list periodically, rather than having all of the systems pull it directly from IEEE, or from us like bogons. Alternatively, we could keep a copy checked into the main repo as we do with other imported information (such as usr/local/share/mobile-broadband-provider-info/serviceproviders.xml).
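For reference, a minimal sketch of what such a fetch-and-convert step could look like (the attached update_oui.sh is the authoritative version; the destination path and the exact text mangling below are assumptions):

    #!/bin/sh
    # Hypothetical sketch; see the attached update_oui.sh for the real
    # script. Assumes IEEE oui.txt entries of the form
    #   00-00-00   (hex)        XEROX CORPORATION
    # and emits nmap-style lines of the form
    #   000000 XEROX CORPORATION
    URL="http://standards.ieee.org/develop/regauth/oui/oui.txt"
    OUT="/usr/local/share/nmap/nmap-mac-prefixes"   # assumed destination

    fetch -o /tmp/oui.txt "${URL}" || exit 1

    # Keep only the "(hex)" lines, drop the dashes from the prefix, and
    # collapse the "(hex)" marker and surrounding whitespace to one space.
    grep '(hex)' /tmp/oui.txt \
        | sed -e 's/-//g' -e 's/[[:space:]]*(hex)[[:space:]]*/ /' \
        > "${OUT}.tmp"

    # Only replace the existing list if the download produced data.
    [ -s "${OUT}.tmp" ] && mv "${OUT}.tmp" "${OUT}"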


Files

update_oui.sh (1.34 KB) update_oui.sh Jim Pingle, 09/22/2014 09:59 AM
update_oui.sh (1.34 KB) update_oui.sh Jim Pingle, 11/16/2015 08:16 AM
#1

Updated by Jim Pingle about 9 years ago

The main questions are:
  • Do we ship with a file crafted from the IEEE data or build it dynamically on the box?
  • Do we host a copy of the file on our infrastructure (like bogons) or pull straight from IEEE?
  • How often should it be updated? Does it change that often?

I wouldn't want to rely on the end-user boxes pulling from IEEE, especially since they could move the file or change the format at any time.

It makes more sense to have the script run on one of our servers periodically and keep a formatted version that can be updated like bogons, though if we automate the process, a change on the IEEE side could break the script or the update file and hand out bad data unless there are lots of safety belts.
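One such safety belt, sketched here under assumed paths and thresholds, would be to refuse to publish a regenerated list that looks truncated or malformed:

    #!/bin/sh
    # Hypothetical server-side check before publishing a new list.
    NEW=/tmp/oui.new
    LIVE=/usr/local/www/oui/nmap-mac-prefixes   # assumed publish path

    LINES=$(wc -l < "${NEW}")
    BAD=$(grep -cv '^[0-9A-Fa-f]\{6\} ' "${NEW}")

    # Require a plausible number of entries and only well-formed
    # "XXXXXX Vendor" lines before replacing the live copy.
    if [ "${LINES}" -gt 10000 ] && [ "${BAD}" -eq 0 ]; then
        mv "${NEW}" "${LIVE}"
    else
        echo "OUI update rejected: ${LINES} lines, ${BAD} malformed" >&2
        exit 1
    fi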

If it doesn't change that often, it may be just as good to keep a copy of the file in the source repo and update it with each release/image build. In terms of work/complexity, running the script manually and putting the result in the repo is the easiest solution.

Attaching an updated version of the script since the format of the IEEE source changed slightly.

#2

Updated by Kill Bill about 9 years ago

Jim P wrote:

> I wouldn't want to rely on the end-user boxes pulling from IEEE, especially since they could move the file or change the format at any time.

Also, the script is not exactly a speed champ: it would take minutes on Alix boxes. (It could surely be improved by rewriting the script, but I don't think that's worth the effort.)

> If it doesn't change that often, it may be just as good to keep a copy of the file in the source repo and update it with each release/image build. In terms of work/complexity, running the script manually and putting the result in the repo is the easiest solution.

Frankly, none of the info there is critical, though it's useful to have in the GUI. I'd say updating it once per release is quite enough :-) (Also, if you ship the script, people could always put it in cron or update it manually if desired.)

#3

Updated by Jon Gerdes over 6 years ago

Why not reuse this: https://code.wireshark.org/review/gitweb?p=wireshark.git;a=blob_plain;f=manuf;hb=HEAD ? The license is GPL2.

If it's good enough for the WS community, it's good enough for me. It also means that pfSense falls neatly into line with Wireshark for packet capture exports, which are generally read in WS.

That file could be treated similarly to bogons, with, say, a monthly pull spread randomly across systems:

These lists: https://primes.utm.edu/lists/small/millions/ get you quite a few primes, or you can generate your own and store them locally. Randomly pick one (with a nod to how much entropy is available on a given platform) per install/per upgrade/per download/whatever and keep those in a list. Now use those primes to create the cron jobs that perform the downloads. A year is roughly 31.5M seconds; a month is about 2.6M, and so on. Scale the time period and pick appropriate primes to generate a time to put into cron. Once the job is done, generate another appropriate prime and update cron. I don't know whether something similar to "at" is available in BSD, which might work better with this scheme. With big enough lists of primes and a decent spread of times, this should spread the load nicely, both on the sources and on the firewalls themselves.

I've bodged around one or two problems using prime numbers in scheduled tasks ...
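As a concrete sketch of the spreading idea above, simplified to a uniform random offset rather than primes (at(1) and jot(1) are in the FreeBSD base system, which answers the "at" question; the script path is an assumption):

    #!/bin/sh
    # Hypothetical sketch: queue the next OUI pull at a random point in a
    # roughly one-month window so fleets of firewalls do not all hit the
    # download server at the same moment.
    MONTH_SECS=2592000                     # ~30 days in seconds
    OFFSET=$(jot -r 1 0 "${MONTH_SECS}")   # random offset, in seconds

    # at(1) takes offsets in minutes; schedule the next download there.
    echo "/usr/local/bin/update_oui.sh" | at now + $((OFFSET / 60)) minutes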
