Hi everyone. In an earlier post I shared some example code for scraping Renren (renren.com); today I have another Renren crawler example, this time written in Haskell, using http-conduit for the HTTP requests and TagSoup for the HTML parsing. It is fairly concise and easy to follow, so let's go through it together.
```haskell
import Network.HTTP.Client (Proxy (..), httpLbs, managerSetProxy, newManager,
                            parseRequest, responseBody, useProxy)
import Network.HTTP.Client.TLS (tlsManagerSettings)
import Network.HTTP.Conduit (simpleHttp)
import Text.HTML.TagSoup (innerText, isTagCloseName, isTagOpenName, parseTags, sections)
import qualified Data.ByteString.Char8 as B8
import qualified Data.ByteString.Lazy.Char8 as L8
import Data.List (stripPrefix)
import Data.Maybe (fromMaybe)

main :: IO ()
main = do
  proxy <- getProxy
  let url = "http://www.renren.com"
  body <- fetchUrl proxy url
  -- Note: the body is treated as raw bytes here; decode it as UTF-8 first
  -- if you need the Chinese title rendered correctly.
  putStrLn $ "Title: " ++ getTitle (L8.unpack body)

-- Pick the first proxy from the list; abort if none is available.
getProxy :: IO (String, Int)
getProxy = do
  proxies <- getProxyList
  case proxies of
    []      -> error "No proxy server available"
    (x : _) -> do
      putStrLn $ "Using proxy server: " ++ x
      return (parseHostPort x)

-- Fetch the proxy list, assumed to be one "host:port" entry per line,
-- optionally prefixed with "http://".
getProxyList :: IO [String]
getProxyList = do
  body <- simpleHttp "https://www.duoip.cn/get_proxy"
  let strip s = fromMaybe s (stripPrefix "http://" s)
  return . filter (not . null) . map strip . lines $ L8.unpack body

-- Split "host:port" into its parts, defaulting to port 80.
parseHostPort :: String -> (String, Int)
parseHostPort s = case break (== ':') s of
  (h, ':' : p) -> (h, read p)
  (h, _)       -> (h, 80)

-- Send the request through the given proxy and return the raw body.
fetchUrl :: (String, Int) -> String -> IO L8.ByteString
fetchUrl (host, port) url = do
  let settings = managerSetProxy
        (useProxy Proxy { proxyHost = B8.pack host, proxyPort = port })
        tlsManagerSettings
  manager  <- newManager settings
  request  <- parseRequest url
  response <- httpLbs request manager
  return (responseBody response)

-- Extract the text inside the first <title> element, or "unknown".
getTitle :: String -> String
getTitle html =
  case sections (isTagOpenName "title") (parseTags html) of
    (section : _) -> innerText $
      takeWhile (not . isTagCloseName "title") (drop 1 section)
    []            -> "unknown"
```
Unlike the earlier code, this program first fetches a usable proxy address, then routes the HTTP request to the site through that proxy. It then parses the HTML, extracts the page title, and prints it, with no extra steps beyond that. In practice you can use it as a starting point and adapt it to your own needs, so give it a try.
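One adaptation you will probably want in practice is a fallback: entries from a public proxy list are often dead, so instead of committing to the first one you can try each proxy until a request succeeds. Below is a minimal sketch of that idea; it reuses getProxyList, parseHostPort and fetchUrl from the program above, and fetchWithFallback is a hypothetical helper name, not something from the original code.

```haskell
import Control.Exception (try)
import Network.HTTP.Client (HttpException)
import qualified Data.ByteString.Lazy.Char8 as L8

-- Hypothetical helper: try each proxy in turn and return the first
-- successful response body, or Nothing if every proxy fails.
fetchWithFallback :: String -> IO (Maybe L8.ByteString)
fetchWithFallback url = do
  proxies <- getProxyList
  go (map parseHostPort proxies)
  where
    go []         = return Nothing
    go (p : rest) = do
      result <- try (fetchUrl p url) :: IO (Either HttpException L8.ByteString)
      case result of
        Right body -> return (Just body)
        Left _     -> go rest  -- this proxy failed; move on to the next one
```

Swapping this in for the single getProxy/fetchUrl pair in main keeps the rest of the program unchanged.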