Parsing a table with Python BeautifulSoup

2025-01-15 08:45:00
admin

Problem description:

I am learning Python's requests and BeautifulSoup. As an exercise, I chose to write a quick New York City parking-ticket parser. The HTML response I get back is quite ugly. I need to grab the lineItemsTable and parse all the tickets.

You can reproduce the page by going to https://paydirect.link2gov.com/NYCParking-Plate/ItemSearch and entering the NY plate T630134C.

soup = BeautifulSoup(plateRequest.text, "html.parser")  # specify a parser explicitly
#print(soup.prettify())
#print(soup.find_all('tr'))

table = soup.find("table", {"class": "lineItemsTable"})
for row in table.findAll("tr"):
    cells = row.findAll("td")
    print(cells)

Could somebody please help me out? Simply searching for all tr gets me nowhere either.


Solution 1:

Here you go:

data = []
table = soup.find('table', attrs={'class':'lineItemsTable'})
table_body = table.find('tbody')

rows = table_body.find_all('tr')
for row in rows:
    cols = row.find_all('td')
    cols = [ele.text.strip() for ele in cols]
    data.append([ele for ele in cols if ele]) # Get rid of empty values

This will give you:

[ [u'1359711259', u'SRF', u'08/05/2013', u'5310 4 AVE', u'K', u'19', u'125.00', u'$'], 
  [u'7086775850', u'PAS', u'12/14/2013', u'3908 6th Ave', u'K', u'40', u'125.00', u'$'], 
  [u'7355010165', u'OMT', u'12/14/2013', u'3908 6th Ave', u'K', u'40', u'145.00', u'$'], 
  [u'4002488755', u'OMT', u'02/12/2014', u'NB 1ST AVE @ E 23RD ST', u'5', u'115.00', u'$'], 
  [u'7913806837', u'OMT', u'03/03/2014', u'5015 4th Ave', u'K', u'46', u'115.00', u'$'], 
  [u'5080015366', u'OMT', u'03/10/2014', u'EB 65TH ST @ 16TH AV E', u'7', u'50.00', u'$'], 
  [u'7208770670', u'OMT', u'04/08/2014', u'333 15th St', u'K', u'70', u'65.00', u'$'], 
  [u'$0.00\n\n\nPayment Amount:']
]

A few things to note:

  • The last row in the output above, the Payment Amount, is not part of the ticket data, but that is how the table is laid out. You can filter it out by checking whether the list's length is less than 7.

  • Since the last column of every row is an input text box, it has to be handled separately.
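The filtering from the first note can be sketched as follows (a minimal example; `data` stands in for the parsed rows produced by the snippet above):

```python
# Sample parsed rows: real ticket rows have 8 values, while the trailing
# "Payment Amount" row ends up much shorter after empty cells are dropped.
data = [
    ['1359711259', 'SRF', '08/05/2013', '5310 4 AVE', 'K', '19', '125.00', '$'],
    ['7086775850', 'PAS', '12/14/2013', '3908 6th Ave', 'K', '40', '125.00', '$'],
    ['$0.00 Payment Amount:'],
]

# Drop any row with fewer than 7 values, as suggested above.
tickets = [row for row in data if len(row) >= 7]
print(len(tickets))  # 2
```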

Solution 2:

Updated answer

If a programmer is only interested in parsing tables from a web page, they can use the pandas method pandas.read_html.

Suppose we want to extract the GDP data table from this site: https://worldpopulationreview.com/countries/countries-by-gdp/#worldCountries

Then the following code does the job perfectly (no beautifulsoup or fancy html needed):

Using pandas only

import pandas as pd

# sometimes we can read the tables directly from the website
url = "https://en.wikipedia.org/wiki/AFI%27s_100_Years...100_Movies#:~:text=%20%20%20%20Film%20%20%20,%20%204%20%2025%20more%20rows%20"
df = pd.read_html(url)[0]  # read_html returns a list of DataFrames
df.head()

Using pandas and requests (the more common case)

from io import StringIO

# if pd.read_html(url) does not work directly, fetch the page with requests first
import pandas as pd
import requests

url = "https://worldpopulationreview.com/countries/countries-by-gdp/#worldCountries"

r = requests.get(url)
df_list = pd.read_html(StringIO(r.text))  # parses every table on the page into a list
df = df_list[0]
df.head()

Required modules

pip install lxml
pip install requests
pip install pandas

Output

(First five rows of the table, as rendered from the website)
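When a page carries several tables, `pd.read_html` also accepts a `match` argument to keep only tables whose text matches a string or regex; a small sketch with made-up inline HTML:

```python
from io import StringIO

import pandas as pd

html = """
<table><tr><th>City</th></tr><tr><td>NYC</td></tr></table>
<table><tr><th>Country</th><th>GDP</th></tr><tr><td>US</td><td>21.41</td></tr></table>
"""

# match= filters the returned list to tables containing the given text
dfs = pd.read_html(StringIO(html), match="GDP")
df = dfs[0]
print(len(dfs), list(df.columns))
```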

Solution 3:

Solved; this is how you parse their HTML results:

table = soup.find("table", {"class": "lineItemsTable"})
for row in table.findAll("tr"):
    cells = row.findAll("td")
    if len(cells) == 9:
        summons = cells[1].find(text=True)
        plateType = cells[2].find(text=True)
        vDate = cells[3].find(text=True)
        location = cells[4].find(text=True)
        borough = cells[5].find(text=True)
        vCode = cells[6].find(text=True)
        amount = cells[7].find(text=True)
        print(amount)
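Note that `find(text=True)` is a legacy spelling; current BeautifulSoup usually pulls cell text with `get_text(strip=True)`. A sketch with a made-up single-row table:

```python
from bs4 import BeautifulSoup

html = """
<table class="lineItemsTable"><tr>
  <td></td><td>1359711259</td><td>SRF</td><td>08/05/2013</td>
  <td>5310 4 AVE</td><td>K</td><td>19</td><td>125.00</td><td>$</td>
</tr></table>
"""

soup = BeautifulSoup(html, "html.parser")
cells = soup.find("tr").find_all("td")
# get_text(strip=True) collapses a cell to its stripped text content
amount = cells[7].get_text(strip=True)
print(amount)  # 125.00
```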

Solution 4:

Here is a generic working example for an arbitrary <table>. (The link in the question is broken.)

It extracts the table of countries by GDP (Gross Domestic Product) from here.

htmltable = soup.find('table', { 'class' : 'table table-striped' })
# where the dictionary specifies unique attributes for the 'table' tag

The tableDataText function parses an HTML segment that starts with a <table> tag, followed by multiple <tr> (table row) tags with inner <td> (table data) tags. It returns a list of rows with their inner columns, and accepts an optional <th> (table header) row as the first row.

def tableDataText(table):       
    rows = []
    trs = table.find_all('tr')
    headerow = [td.get_text(strip=True) for td in trs[0].find_all('th')] # header row
    if headerow: # if there is a header row include first
        rows.append(headerow)
        trs = trs[1:]
    for tr in trs: # for every table row
        rows.append([td.get_text(strip=True) for td in tr.find_all('td')]) # data row
    return rows

Using it we get (the first two rows):

list_table = tableDataText(htmltable)
list_table[:2]

[['Rank',
  'Name',
  "GDP (IMF '19)",
  "GDP (UN '16)",
  'GDP Per Capita',
  '2019 Population'],
 ['1',
  'United States',
  '21.41 trillion',
  '18.62 trillion',
  '$65,064',
  '329,064,917']]

This can easily be converted into a pandas.DataFrame for more advanced tooling.

import pandas as pd
dftable = pd.DataFrame(list_table[1:], columns=list_table[0])
dftable.head(4)

(pandas DataFrame output of the HTML table)

Solution 5:

I was interested in the tables of a MediaWiki version display, e.g. https://en.wikipedia.org/wiki/Special:Version

Unit test

from unittest import TestCase
import pprint

from html_table import HtmlTable  # the class under test, defined in html_table.py below

class TestHtmlTables(TestCase):
    '''
    test the HTML Tables parser
    '''
    def testHtmlTables(self):
        url="https://en.wikipedia.org/wiki/Special:Version"
        html_table=HtmlTable(url)
        tables=html_table.get_tables("h2")
        pp = pprint.PrettyPrinter(indent=2)
        debug=True
        if debug:
            pp.pprint(tables)
        pass

html_table.py

'''
Created on 2022-10-25

@author: wf
'''
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

class HtmlTable(object):
    '''
    HtmlTable
    '''

    def __init__(self, url):
        '''
        Constructor
        '''
        req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
        self.html_page = urlopen(req).read()

        self.soup = BeautifulSoup(self.html_page, 'html.parser')
        
    def get_tables(self,header_tag:str=None)->dict:
        """
        get all tables from my soup as a list of list of dicts
        
        Args:
            header_tag(str): if set search the table name from the given header tag
        
        Return:
            dict: the list of list of dicts for all tables
            
        """
        tables = {}
        for i,table in  enumerate(self.soup.find_all("table")):
            fields = []
            table_data=[]
            for tr in table.find_all('tr', recursive=True):
                for th in tr.find_all('th', recursive=True):
                    fields.append(th.text)
            for tr in table.find_all('tr', recursive=True):
                record = {}
                # use a separate index so the outer table counter `i` is not shadowed
                for col, td in enumerate(tr.find_all('td', recursive=True)):
                    record[fields[col]] = td.text
                if record:
                    table_data.append(record)
            if header_tag is not None:
                header=table.find_previous_sibling(header_tag)
                table_name=header.text
            else:
                table_name=f"table{i}"
            tables[table_name]=(table_data)
        return tables

Result

Finding files... done.
Importing test modules ... done.
Tests to run: ['TestHtmlTables.testHtmlTables']

testHtmlTables (tests.test_html_table.TestHtmlTables) ... Starting test testHtmlTables, debug=False ...
{ 'Entry point URLs': [ {'Entry point': 'Article path', 'URL': '/wiki/$1'},
                        {'Entry point': 'Script path', 'URL': '/w'},
                        {'Entry point': 'index.php', 'URL': '/w/index.php'},
                        {'Entry point': 'api.php', 'URL': '/w/api.php'},
                        {'Entry point': 'rest.php', 'URL': '/w/rest.php'}],
  'Installed extensions': [ { 'Description': 'Brad Jorsch',
                              'Extension': '1.0 (b9a7bff) 01:45, 9 October '
                                           '2022',
                              'License': 'Get a summary of logged API feature '
                                         'usages for a user agent',
                              'Special pages': 'ApiFeatureUsage',
                              'Version': 'GPL-2.0-or-later'},
                            { 'Description': 'Brion Vibber, Kunal Mehta, Sam '
                                             'Reed, Aaron Schulz, Brad Jorsch, '
                                             'Umherirrender, Marius Hoch, '
                                             'Andrew Garrett, Chris Steipp, '
                                             'Tim Starling, Gergő Tisza, '
                                             'Alexandre Emsenhuber, Victor '
                                             'Vasiliev, Glaisher, DannyS712, '
                                             'Peter Gehres, Bryan Davis, James '
                                             'D. Forrester, Taavi Väänänen and '
                                             'Alexander Vorwerk',
                              'Extension': '– (df2982e) 23:10, 13 October 2022',
                              'License': 'Merge account across wikis of the '
                                         'Wikimedia Foundation',
                              'Special pages': 'CentralAuth',
                              'Version': 'GPL-2.0-or-later'},
                            { 'Description': 'Tim Starling and Aaron Schulz',
                              'Extension': '2.5 (648cfe0) 06:20, 17 October '
                                           '2022',
                              'License': 'Grants users with the appropriate '
                                         'permission the ability to check '
                                         "users' IP addresses and other "
                                         'information',
                              'Special pages': 'CheckUser',
                              'Version': 'GPL-2.0-or-later'},
                            { 'Description': 'Ævar Arnfjörð Bjarmason and '
                                             'James D. Forrester',
                              'Extension': '– (2cf4aaa) 06:41, 14 October 2022',
                              'License': 'Adds a citation special page and '
                                         'toolbox link',
                              'Special pages': 'CiteThisPage',
                              'Version': 'GPL-2.0-or-later'},
                            { 'Description': 'PediaPress GmbH, Siebrand '
                                             'Mazeland and Marcin Cieślak',
                              'Extension': '1.8.0 (324e738) 06:20, 17 October '
                                           '2022',
                              'License': 'Create books',
                              'Special pages': 'Collection',
                              'Version': 'GPL-2.0-or-later'},
                            { 'Description': 'Amir Aharoni, David Chan, Joel '
                                             'Sahleen, Kartik Mistry, Niklas '
                                             'Laxström, Pau Giner, Petar '
                                             'Petković, Runa Bhattacharjee, '
                                             'Santhosh Thottingal, Siebrand '
                                             'Mazeland, Sucheta Ghoshal and '
                                             'others',
                              'Extension': '– (56fe095) 11:56, 17 October 2022',
                              'License': 'Makes it easy to translate content '
                                         'pages',
                              'Special pages': 'ContentTranslation',
                              'Version': 'GPL-2.0-or-later'},
                            { 'Description': 'Andrew Garrett, Ryan Kaldari, '
                                             'Benny Situ, Luke Welling, Kunal '
                                             'Mehta, Moriel Schottlender, Jon '
                                             'Robson and Roan Kattouw',
                              'Extension': '– (cd01f9b) 06:21, 17 October 2022',
                              'License': 'System for notifying users about '
                                         'events and messages',
                              'Special pages': 'Echo',
                              'Version': 'MIT'},
 ..
  'Installed libraries': [ { 'Authors': 'Benjamin Eberlei and Richard Quadling',
                             'Description': 'Thin assertion library for input '
                                            'validation in business models.',
                             'Library': 'beberlei/assert',
                             'License': 'BSD-2-Clause',
                             'Version': '3.3.2'},
                           { 'Authors': '',
                             'Description': 'Arbitrary-precision arithmetic '
                                            'library',
                             'Library': 'brick/math',
                             'License': 'MIT',
                             'Version': '0.8.17'},
                           { 'Authors': 'Christian Riesen',
                             'Description': 'Base32 encoder/decoder according '
                                            'to RFC 4648',
                             'Library': 'christian-riesen/base32',
                             'License': 'MIT',
                             'Version': '1.6.0'},
 ...
                       { 'Authors': 'Readers Web Team, Trevor Parscal, Roan '
                                    'Kattouw, Alex Hollender, Bernard Wang, '
                                    'Clare Ming, Jan Drewniak, Jon Robson, '
                                    'Nick Ray, Sam Smith, Stephen Niedzielski '
                                    'and Volker E.',
                         'Description': 'Provides 2 Vector skins:\n'
                                        '\n'
                                        '2011 - The Modern version of MonoBook '
                                        'with fresh look and many usability '
                                        'improvements.\n'
                                        '2022 - The Vector built as part of '
                                        'the WMF mw:Desktop Improvements '
                                        'project.',
                         'License': 'GPL-2.0-or-later',
                         'Skin': 'Vector',
                         'Version': '1.0.0 (93f11b3) 20:24, 17 October 2022'}],
  'Installed software': [ { 'Product': 'MediaWiki',
                            'Version': '1.40.0-wmf.6 (bb4c5db)17:39, 17 '
                                       'October 2022'},
                          {'Product': 'PHP', 'Version': '7.4.30 (fpm-fcgi)'},
                          { 'Product': 'MariaDB',
                            'Version': '10.4.25-MariaDB-log'},
                          {'Product': 'ICU', 'Version': '63.1'},
                          {'Product': 'Pygments', 'Version': '2.10.0'},
                          {'Product': 'LilyPond', 'Version': '2.22.0'},
                          {'Product': 'Elasticsearch', 'Version': '7.10.2'},
                          {'Product': 'LuaSandbox', 'Version': '4.0.2'},
                          {'Product': 'Lua', 'Version': '5.1.5'}]}
test testHtmlTables, debug=False took   1.2 s
ok

----------------------------------------------------------------------
Ran 1 test in 1.204s

OK

Solution 6:

from behave import *
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec
import pandas as pd
import requests
from bs4 import BeautifulSoup
from tabulate import tabulate

class readTableDataFromDB: 
    def LookupValueFromColumnSingleKey(context, tablexpath, rowName, columnName):
        print("element present readData From Table")
        element = context.driver.find_elements(By.XPATH, tablexpath + "/descendant::th")
        indexrow = 1
        indexcolumn = 1
        for values in element:
            valuepresent = values.text
            print("text present here::"+valuepresent+"rowName::"+rowName)
            if valuepresent.find(columnName) != -1:
                 print("current row"+str(indexrow) +"value"+valuepresent)
                 break
            else:
                 indexrow = indexrow+1    

        indexvalue = context.driver.find_elements(By.XPATH,
            tablexpath + "/descendant::tr/td[1]")
        for valuescolumn in indexvalue:
            valuepresentcolumn = valuescolumn.text
            print("Team text present here::" +
                  valuepresentcolumn+"columnName::"+rowName)
            print(indexcolumn) 
            if valuepresentcolumn.find(rowName) != -1:
                print("current column"+str(indexcolumn) +
                      "value"+valuepresentcolumn)
                break
            else:
                indexcolumn = indexcolumn+1

        print("index column"+str(indexcolumn))
        print(tablexpath +"//descendant::tr["+str(indexcolumn)+"]/td["+str(indexrow)+"]")
        #lookupelement = context.driver.find_element(By.XPATH, tablexpath + "//descendant::tr["+str(indexcolumn)+"]/td["+str(indexrow)+"]")
        #print(lookupelement.text)
        return context.driver.find_elements(By.XPATH, tablexpath + "//descendant::tr["+str(indexcolumn)+"]/td["+str(indexrow)+"]")

    def LookupValueFromColumnTwoKeyssss(context, tablexpath, rowName, columnName, columnName1):
        print("element present readData From Table")
        element = context.driver.find_elements(By.XPATH,
            tablexpath + "/descendant::th")
        indexrow = 1
        indexcolumn = 1
        indexcolumn1 = 1
        for values in element:
            valuepresent = values.text
            print("text present here::"+valuepresent)
            indexrow = indexrow+1
            if valuepresent == columnName:
                print("current row value"+str(indexrow)+"value"+valuepresent)
                break

        for values in element:
            valuepresent = values.text
            print("text present here::"+valuepresent)
            indexrow = indexrow+1
            if valuepresent.find(columnName1) != -1:
                print("current row value"+str(indexrow)+"value"+valuepresent)
                break

        indexvalue = context.driver.find_elements(By.XPATH,
            tablexpath + "/descendant::tr/td[1]")
        for valuescolumn in indexvalue:
            valuepresentcolumn = valuescolumn.text
            print("Team text present here::"+valuepresentcolumn)
            print(indexcolumn)
            indexcolumn = indexcolumn+1
            if valuepresentcolumn.find(rowName) != -1:  # compare the cell text, not the header text
                print("current column"+str(indexcolumn) +
                      "value"+valuepresentcolumn)
                break
        print("indexrow"+str(indexrow))
        print("index column"+str(indexcolumn))
        lookupelement = context.driver.find_element(By.XPATH,
            tablexpath + "//descendant::tr["+str(indexcolumn)+"]/td["+str(indexrow)+"]")
        print(tablexpath +
              "//descendant::tr["+str(indexcolumn)+"]/td["+str(indexrow)+"]")
        print(lookupelement.text)
        return context.driver.find_element(By.XPATH, tablexpath + "//descendant::tr["+str(indexrow)+"]/td["+str(indexcolumn)+"]")
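As a lighter-weight alternative to walking cells by hand-built XPath indices, the rendered page (e.g. Selenium's driver.page_source) can be handed to pandas.read_html and queried with DataFrame indexing; a sketch, with `html` standing in for the page source of a hypothetical table:

```python
from io import StringIO

import pandas as pd

# Hypothetical page source; in practice this would be driver.page_source
# captured after the table has rendered.
html = """
<table>
  <tr><th>Team</th><th>Points</th></tr>
  <tr><td>A</td><td>10</td></tr>
  <tr><td>B</td><td>7</td></tr>
</table>
"""

df = pd.read_html(StringIO(html))[0]
# Look up a value by row key and column name instead of tr/td positions.
points = df.loc[df["Team"] == "B", "Points"].iloc[0]
print(points)  # 7
```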