At some point you may be asked to complete a vulnerability audit of your network estate. In the past this would have required searching a near-endless list of Cisco vulnerabilities and then cross-referencing these against an even longer list of Bug IDs. Thankfully Cisco released the PSIRT API, which can ease the pain of this process.
My previous post on this topic required an APIC-EM instance to act as the source of information for the network inventory. This time round I will use something a little more static, a CSV:


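For reference, here is a minimal sketch of what such an inventory CSV might contain. The platforms and versions are examples only; the column names ("platform" and "ios_version") are the ones the loader code later in the post reads:

```python
import csv
import io

# Hypothetical inventory contents; platforms and versions are examples only.
SAMPLE_INVENTORY = """platform,ios_version
3560,12.2(50)SE3
3750-X,12.2(55)SE5
"""

# csv.DictReader turns each line into a dict keyed by the header row.
rows = list(csv.DictReader(io.StringIO(SAMPLE_INVENTORY)))
for row in rows:
    print(row["platform"], row["ios_version"])
```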
The first step is to obtain the OAuth token which will allow us to use the API:


def get_api_token(url):
    # POST the client credentials to the OAuth endpoint and pull the
    # access token out of the JSON response
    response = requests.post(url, verify=False, proxies=PROXIES,
                             data={"grant_type": "client_credentials"},
                             headers={"Content-Type": "application/x-www-form-urlencoded"},
                             params={"client_id": CLIENT_ID, "client_secret": CLIENT_PASS})

    if response is not None:
        return json.loads(response.text)["access_token"]

    return None

You will need to sign up to obtain your own unique client ID and password.
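The snippets in this post rely on a handful of module-level constants. A sketch of what they might look like follows; the endpoint URL and the placeholder credentials are assumptions for illustration, not values from the original script:

```python
# Placeholder credentials -- substitute the client ID/secret issued to you.
CLIENT_ID = "your-client-id"
CLIENT_PASS = "your-client-secret"

# Optional proxy configuration; an empty dict means a direct connection.
PROXIES = {}

# Assumed advisory-lookup endpoint, parameterised on the IOS version string.
API_GET_ADVISORIES = "https://api.cisco.com/security/advisories/ios?version={0}"

print(API_GET_ADVISORIES.format("12.2(50)SE3"))
```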

With the token and the IOS version number, a PSIRT REST GET query is created, and from the returned JSON response we pluck the key/value pairs we are interested in and return them as a dictionary:

def get_advisories_by_release(token, platform, ver):
    platform_dict = {"platform": platform, "release": ver, "advisories": []}
    response = requests.get(API_GET_ADVISORIES.format(ver), verify=False, proxies=PROXIES,
                            headers={"Authorization": "Bearer {0}".format(token), "Accept": "application/json"})

    if response.status_code == 200:
        platform_dict["advisories"] = build_dictionary_relevant_advisories(json.loads(response.text)["advisories"])
        return platform_dict

    return {"platform": platform, "release": ver, "advisories": [], "state": "ERROR", "detail": response.status_code}

def build_dictionary_relevant_advisories(advisories):
    adv_list = []
    for adv in advisories:
        adv_dict = dict()
        adv_dict["advisory_id"] = adv["advisoryId"] if "advisoryId" in adv else "Unknown"
        adv_dict["advisory_title"] = adv["advisoryTitle"] if "advisoryTitle" in adv else "Unknown"
        adv_dict["bug_ids"] = adv["bugIDs"] if "bugIDs" in adv else "Unknown"
        adv_dict["first_fixed"] = adv["firstFixed"] if "firstFixed" in adv else "Unknown"
        adv_list.append(adv_dict)

    return adv_list
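As a quick sanity check, the key-renaming step can be exercised in isolation with a fabricated advisory record (the field values below are invented for illustration):

```python
# Invented sample mirroring the shape of one entry in the PSIRT
# "advisories" list; "firstFixed" is deliberately omitted to show
# the "Unknown" fallback.
sample_advisories = [{
    "advisoryId": "cisco-sa-example",
    "advisoryTitle": "Example Advisory",
    "bugIDs": ["CSCxx00001"],
}]

adv_list = []
for adv in sample_advisories:
    adv_dict = {
        "advisory_id": adv.get("advisoryId", "Unknown"),
        "advisory_title": adv.get("advisoryTitle", "Unknown"),
        "bug_ids": adv.get("bugIDs", "Unknown"),
        "first_fixed": adv.get("firstFixed", "Unknown"),
    }
    adv_list.append(adv_dict)

print(adv_list[0]["first_fixed"])  # falls back to "Unknown"
```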

We iterate through the CSV file one line at a time, and the returned dictionaries are stored in a list:

def load_csv(input_csv, token):
    big_list = []
    with open(input_csv, "r") as file:
        for device_row in csv.DictReader(file):
            big_list.append(get_advisories_by_release(token, device_row["platform"], device_row["ios_version"]))

    return big_list

The next step is to iterate through the list and print the output in a more readable format. We could use json.dumps() with indent=2, but since the dictionary has the key/value pair ‘advisories’, which is itself a list of dictionaries, the resulting output is not that readable. The following method takes information from both the CSV and the PSIRT API to provide information for each platform/release pair:

def print_advisories(source_dict, detail=True):
    for item in source_dict:
        print("Platform: {0}, Current release: {1}".format(item["platform"], item["release"]))
        print("  {0} advisories".format(len(item["advisories"])))
        if len(item["advisories"]) == 0:
            message = "ERROR encountered during lookup: {0}".format(item["detail"]) if item.get("state") == "ERROR" \
                else "None found"
            print("  {0}".format(message))
            continue

        detail_t = ""
        fixed_releases = []
        for adv in item["advisories"]:
            if adv is not None:
                detail_t = detail_t + DETAIL_TEXT.format(adv["advisory_id"], adv["advisory_title"],
                                                         ", ".join(adv["first_fixed"]), ", ".join(adv["bug_ids"]))
                fixed_releases = fixed_releases + adv["first_fixed"]

        print("  Minimum suggested release: {0}".format(sorted(fixed_releases)[-1]))
        if detail:
            print(detail_t)
In the event that an error was encountered during the PSIRT API lookup process, the error message is displayed under the platform/release pair to notify the user that it occurred; otherwise the advisories list would be empty, giving the impression that no vulnerabilities were present.
For each platform/release the first-fixed values are added to a list, which is then sorted and the highest value picked to give the minimum suggested release. The output looks like:
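The "highest release wins" step boils down to a plain lexicographic sort of the version strings, which happens to work for the release families shown here but is worth being aware of. A minimal illustration with sample data:

```python
# First-fixed releases gathered across a platform's advisories (sample data).
fixed_releases = ["12.2(55)SE13", "15.0(2a)SE9", "12.2(55)SE13"]

# sorted() compares the version strings lexicographically; the last
# element is treated as the minimum suggested release.
suggested = sorted(fixed_releases)[-1]
print(suggested)
```

Note that lexicographic ordering is not a true version comparison (for instance, "9.x" would sort above "15.x"), so this shortcut holds only while all releases share a comparable numbering scheme.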

Platform: 3560, Current release: 12.2(50)SE3
  32 advisories
  Minimum suggested release: 15.0(2a)SE9
  ID cisco-sa-20180926-cmp -- Cisco IOS and IOS XE Software Cluster Management Protocol Denial of Service Vulnerability
    First fixed: 12.2(55)SE13
    Bug IDs: CSCvg48576
  ID cisco-sa-20180926-tacplus -- Cisco IOS and IOS XE Software TACACS+ Client Denial of Service Vulnerability
    First fixed: 12.2(55)SE13
    Bug IDs: CSCux66796
  ID cisco-sa-20180926-vtp -- Cisco IOS and IOS XE Software VLAN Trunking Protocol Denial of Service Vulnerability
    First fixed: 12.2(55)SE13
    Bug IDs: CSCvd37163

As much as I like text files, the data is much more useful if you can tabulate it. Let's first turn it into a dictionary, where each key/value pair can represent a line.
This next method first takes our PSIRT-generated list and, for each dictionary contained within it, uses the “platform” value as a key to build a boolean dictionary, “platforms”. The data structure will look like:

"3560": False,
"3750-X": False

Now we take the PSIRT list and, for each element, extract the advisories and add them to csv_dict, but only if we haven’t done so already. This way we end up with a list of unique advisories. Against each advisory we store a copy of the “platforms” dictionary from earlier, and every time a platform is recorded as having this advisory we change the boolean to True. The data structure (the section we are interested in) will look like:

  "advisory_id": "cisco-sa-20180926-cmp"
  "affected_platforms": {
                         "3560": True,
                         "3750-X": False

What you will end up with is a list of dictionaries, where each dictionary is a unique advisory ID with a dictionary of affected platforms:

def build_csv_dict(source_list):
    csv_dict = dict()
    platforms = dict()

    for p in source_list:
        platforms[p["platform"]] = False

    for item in source_list:
        for adv in item["advisories"]:
            if adv is not None:
                if adv["advisory_id"] not in csv_dict:
                    csv_dict[adv["advisory_id"]] = adv
                    csv_dict[adv["advisory_id"]]["affected_platforms"] = platforms.copy()

                csv_dict[adv["advisory_id"]]["affected_platforms"][item["platform"]] = True

    print(json.dumps(csv_dict, indent=2))
    return csv_dict, list(platforms.keys())
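A small, self-contained run of the same pivoting logic with fabricated input shows the resulting structure (the platform and advisory names are invented for illustration):

```python
# Two platforms: one advisory affecting the 3560 only, and one
# platform with no advisories at all.
source_list = [
    {"platform": "3560", "release": "12.2(50)SE3", "advisories": [
        {"advisory_id": "cisco-sa-A", "advisory_title": "A",
         "bug_ids": [], "first_fixed": []},
    ]},
    {"platform": "3750-X", "release": "12.2(55)SE5", "advisories": []},
]

# Boolean template: every platform starts as "not affected".
platforms = {item["platform"]: False for item in source_list}

csv_dict = {}
for item in source_list:
    for adv in item["advisories"]:
        if adv["advisory_id"] not in csv_dict:
            csv_dict[adv["advisory_id"]] = adv
            # Each advisory gets its own copy of the platform flags.
            csv_dict[adv["advisory_id"]]["affected_platforms"] = platforms.copy()
        csv_dict[adv["advisory_id"]]["affected_platforms"][item["platform"]] = True

print(csv_dict["cisco-sa-A"]["affected_platforms"])
```

The .copy() matters: without it, every advisory would share one flags dictionary and marking one platform would mark them all.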

Next we start the process of writing each dictionary in the list as a line in a CSV file:

def write_to_csv(source_dict, platform_list):
    headernames = ["advisory_id", "advisory_title", "first_fixed", "bug_ids"] + platform_list

    with open("vuln_checker_output" + ".csv", "w", newline="") as csvfile:
        csvwriter = csv.writer(csvfile, delimiter=",")
        csvwriter.writerow(headernames)

        for adv in source_dict:
            row = [source_dict[adv]["advisory_id"], source_dict[adv]["advisory_title"],
                  "/ ".join(source_dict[adv]["first_fixed"]), "/ ".join(source_dict[adv]["bug_ids"])]
            for p in platform_list:
                row.append(source_dict[adv]["affected_platforms"][p])

            csvwriter.writerow(row)

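The row-building can be checked in isolation by writing to an in-memory buffer instead of a file (sample data invented for illustration):

```python
import csv
import io

platform_list = ["3560", "3750-X"]
source_dict = {
    "cisco-sa-A": {"advisory_id": "cisco-sa-A", "advisory_title": "A",
                   "first_fixed": ["12.2(55)SE13"], "bug_ids": ["CSCvg48576"],
                   "affected_platforms": {"3560": True, "3750-X": False}},
}

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["advisory_id", "advisory_title", "first_fixed", "bug_ids"] + platform_list)

for adv in source_dict:
    # Fixed columns first, then one True/False column per platform.
    row = [source_dict[adv]["advisory_id"], source_dict[adv]["advisory_title"],
           "/ ".join(source_dict[adv]["first_fixed"]), "/ ".join(source_dict[adv]["bug_ids"])]
    for p in platform_list:
        row.append(source_dict[adv]["affected_platforms"][p])
    writer.writerow(row)

print(buffer.getvalue())
```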
Take the CSV, sling it into your favourite spreadsheet app, add some formatting, and the task is nearly complete:


It is worth pointing out that although a device is marked as True for being affected by a vulnerability, you will have to take the manual step of cross-referencing the Bug ID against your running configs to determine whether it really is vulnerable.

Full source can be found here:
