
Python: converting data scraped from a website inside nested loops from rows to columns

Chris  •  5 years ago  •  1330 views

I'm trying to convert rows generated inside a nested for loop into columns.

In short: value 1 is in a row, and the data belonging to value 1 must become columns; value 2 is in a row, and the data belonging to value 2 must become columns.

Right now all the values are exported as rows, and after that all the data for each value is also exported as rows, which makes the output unreadable.

The problem is that to get value 1, value 2, and so on, I have to iterate through a for loop, and to get all the data belonging to value 1 I have to iterate through another for loop (a nested loop).

All the data I collect comes from a website (scraping). I've included an Imgur link showing how it currently looks and how it should look (my progress so far). The first image is how it is, the second is how it should be. I think pictures explain it better than my own words: https://imgur.com/a/2LRhQrj

I'm using pandas and XlsxWriter to save to Excel. I've managed to export all the data to Excel, but I can't seem to turn the values belonging to each item into columns. The first row is the time; that's how it's supposed to work.

        #Initialize things before loop
        df = pd.DataFrame()
        ### Time based on hour 00:00, 01:00 etc...
        df_time = pd.DataFrame(columns=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23])

        for listing in soup.find_all('tr'):

            listing.attrs = {}
            #assetTime = listing.find_all("td", {"class": "locked"})
            assetCell = listing.find_all("td", {"class": "assetCell"})
            assetValue = listing.find_all("td", {"class": "assetValue"})


            for data in assetCell:

                array = [data.get_text()]
                df = df.append(pd.DataFrame({
                                        'Fridge name': array,
                                        }))

                for value in assetValue:

                    asset_array = [value.get_text()]
                    df_time = df_time.append(pd.DataFrame({
                                                'Temperature': asset_array
                                                }))
                ### End of assetValue loop
            ### End of assetCell loop

        ### Now we need to save the data to excel
        ### Create a Pandas Excel writer using XlsxWriter as the Engine
        writer = pd.ExcelWriter(filename+'.xlsx', engine='xlsxwriter')

        ### Convert dataframes
        frames = [df, df_time]
        result = pd.concat(frames)

        ### Convert the dataframe to an XlsxWriter Excel object and skip first row for custom header
        result.to_excel(writer, sheet_name='SheetName', startrow=1, header=True)

        ### Get the xlsxwriter workbook and worksheet objects
        workbook = writer.book
        worksheet = writer.sheets['SheetName']

        ### Write the column headers with the defined add_format
        for col_num, value in enumerate(result.columns.values):
            worksheet.write(0, col_num + 1, value)

        ### Close the Pandas Excel writer and output the Excel file
        writer.save()
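
For illustration, the wide layout described above (one row per fridge, the hourly readings as columns) could also be built by collecting each row's values into a dictionary first and creating the DataFrame in one step, instead of appending row by row. The snippet below is only a minimal, self-contained sketch: the inline HTML, fridge names and readings are made up, and only the assetCell/assetValue class names are taken from the code above.

        import pandas as pd
        from bs4 import BeautifulSoup

        ### Inline sample HTML standing in for the scraped page -- the fridge names and
        ### readings are invented; only the class names match the snippet above
        html = """
        <table>
          <tr><td class="assetCell">Fridge A</td>
              <td class="assetValue">4.1</td><td class="assetValue">4.3</td></tr>
          <tr><td class="assetCell">Fridge B</td>
              <td class="assetValue">5.0</td><td class="assetValue">5.2</td></tr>
        </table>
        """

        soup = BeautifulSoup(html, 'lxml')

        rows = {}
        for listing in soup.find_all('tr'):
            name_cell = listing.find('td', {'class': 'assetCell'})
            value_cells = listing.find_all('td', {'class': 'assetValue'})
            if name_cell is None:
                continue
            ### One dict entry per fridge; its readings become the columns later
            rows[name_cell.get_text(strip=True)] = [v.get_text(strip=True) for v in value_cells]

        ### Build the wide frame in one go: index = fridge names, columns = 0, 1, 2, ...
        result = pd.DataFrame.from_dict(rows, orient='index')
        result.index.name = 'Fridge name'
        print(result)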
Replies [ 1 ]  |  Latest reply 5 years ago

Chris  •  Reply #1  •  5 years ago

After a lot of testing I went with a different approach. Instead of building the frames row by row with pandas, I went with tabulate: scrape the whole table at once and then export the entire table structure to CSV.

from tabulate import tabulate
import csv
import datetime ### Import date function to make the files based on date
import requests
import pandas as pd ### Needed for read_html below
from bs4 import BeautifulSoup



if (DAY_INTEGER <= 31) and (DAY_INTEGER > 0):

    while True:
        try:
            ### Validate the user input
            form_data = {'UserName': USERNAME, 'Password': PASSWORD}
            with requests.Session() as sesh:
                sesh.post(login_post_url, data=form_data)
                response = sesh.get(internal_url)
                html = response.text
                break
        except requests.exceptions.ConnectionError:
            print ("Whoops! This is embarrasing :( ")
            print ("Unable to connect to the address. Looks like the website is down.")

    if(sesh):

        #BeautifulSoup version
        soup = BeautifulSoup(html,'lxml')
        table = soup.find_all("table")[3] # Skip the first three tables as there isn't anything useful there
        df = pd.read_html(str(table))


        df2 = (tabulate(df[0], headers='keys', tablefmt='psql', showindex=False))

        ### Write the formatted table to file (the with-block closes the file afterwards)
        with open(filename+'.csv', 'w') as myFile:
            myFile.write(str(df2))

    else:
        print("Oops. Something went wrong :(")
        print("It looks like authentication failed")