Abstract
As times change and technology advances, the Internet has transformed the world: every industry is seeking its own growth point in the Internet era, and people's daily lives are increasingly inseparable from it. Second-hand housing is a case in point. The offline second-hand housing business keeps losing ground, and more and more young people choose to browse listings online. However, online information is mixed and scattered across many sources, so how to improve the experience of second-hand housing users has become a question worth exploring.
Because China's second-hand housing market serves a huge population, finding a suitable second-hand home on the Internet has become a problem many people must face. Many websites provide second-hand housing information, yet out of habit users usually search on a single platform; if they cannot find a satisfactory listing there, they are forced to move from one listing platform to another, which is inconvenient. The author believes that only a software system with practical value in daily life deserves to be called a good system. Against this background, this project implements a crawling and analysis system for Wuxi second-hand housing prices based on the Scrapy crawler framework. First, the Python open-source crawler framework Scrapy is used to crawl websites that publish Wuxi second-hand housing prices: crawling strategies are tailored to the characteristics of each site, crawler code is written, and the required listing information is filtered and extracted to build a housing information database organized by city. The storage layer uses the NoSQL database MongoDB, so that the unstructured nature of online information does not hinder data storage. Finally, the Python open-source web framework Django is used to present the crawled second-hand housing information on the web.
Keywords: second-hand housing; distributed crawler; Scrapy; visualization
Abstract
Times are changing and technology keeps advancing: the Internet has changed the world, every industry is looking for its own growth point in the Internet era, and people's daily lives are more and more inseparable from it. Take second-hand housing as an example: the offline second-hand housing industry continues to be disrupted, and an increasing number of young people choose to look for housing online. However, online information is mixed and comes from many sources, so improving the experience of second-hand housing users has become a problem worth discussing.
China's second-hand housing market serves such a large population that finding a suitable second-hand home on the Internet has become a problem people have to face. There are many websites offering second-hand housing information, but because of usage habits users tend to search mainly on one platform; if they cannot find the home they want there, they must shuttle between different listing platforms, which is inconvenient. In the author's view, only a software system with practical significance for daily life can be called a good system. Combined with the above background, this system implements a crawling and analysis system for second-hand housing prices in Wuxi based on the Scrapy crawler framework. First, the Python open-source crawler framework Scrapy crawls websites that publish Wuxi second-hand housing prices; different crawling strategies are chosen according to the characteristics of each site, crawler code is written, and the required listing information is filtered and extracted to establish a housing information database organized by city. The database layer uses the NoSQL database MongoDB to avoid the problems that the unstructured nature of online information would otherwise cause for storage. Then the Python open-source web framework Django is used to display the crawled second-hand housing information on the web.
Keywords: second-hand housing; distributed crawler; Scrapy; visualization
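As a concrete illustration of the crawl-and-store step summarized above, the following is a minimal sketch of a Scrapy spider paired with a MongoDB item pipeline. The spider name, start URL, CSS selectors, and the house_db/wuxi database and collection names are hypothetical placeholders, not the thesis's actual code; a real listing site would need its own selectors and per-site crawling strategy.

import scrapy
import pymongo


class HouseItem(scrapy.Item):
    title = scrapy.Field()      # listing title
    price = scrapy.Field()      # price as shown on the page
    district = scrapy.Field()   # district / neighborhood


class WuxiHouseSpider(scrapy.Spider):
    name = "wuxi_house"
    start_urls = ["https://example.com/wuxi/ershoufang/"]  # placeholder URL

    def parse(self, response):
        # The selectors below are illustrative; each site's page structure
        # differs, so a per-site crawling strategy is defined separately.
        for node in response.css("div.listing"):
            item = HouseItem()
            item["title"] = node.css("a.title::text").get()
            item["price"] = node.css("span.price::text").get()
            item["district"] = node.css("span.district::text").get()
            yield item
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)


class MongoPipeline:
    # Store scraped items in a city-keyed MongoDB collection.
    # Enable this pipeline via the ITEM_PIPELINES setting of the Scrapy project.

    def open_spider(self, spider):
        self.client = pymongo.MongoClient("mongodb://localhost:27017")
        self.collection = self.client["house_db"]["wuxi"]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.collection.insert_one(dict(item))
        return item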
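For the Django display step, this is a minimal sketch assuming a Django project is already configured; the view name, template path, URL route, and the house_db/wuxi collection are assumptions carried over from the crawler sketch above.

import pymongo
from django.shortcuts import render
from django.urls import path


def house_list(request):
    # Read crawled Wuxi listings from MongoDB and pass them to a template.
    client = pymongo.MongoClient("mongodb://localhost:27017")
    listings = list(client["house_db"]["wuxi"].find({}, {"_id": 0}).limit(100))
    client.close()
    return render(request, "houses/house_list.html", {"listings": listings})


# urls.py excerpt: route the listing page to the view above.
urlpatterns = [path("houses/", house_list, name="house_list")]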